00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 825 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3485 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.170 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.235 Using shallow fetch with depth 1 00:00:00.236 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.236 > git --version # timeout=10 00:00:00.281 > git --version # 'git version 2.39.2' 00:00:00.281 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.474 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.486 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.498 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:05.498 > git config core.sparsecheckout # timeout=10 00:00:05.550 > git read-tree -mu HEAD # timeout=10 00:00:05.582 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:05.600 Commit message: "kid: add issue 3541" 00:00:05.600 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:05.692 [Pipeline] Start of Pipeline 00:00:05.703 [Pipeline] library 00:00:05.704 Loading library shm_lib@master 00:00:05.704 Library shm_lib@master is cached. Copying from home. 00:00:05.723 [Pipeline] node 00:00:05.741 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.742 [Pipeline] { 00:00:05.749 [Pipeline] catchError 00:00:05.751 [Pipeline] { 00:00:05.759 [Pipeline] wrap 00:00:05.766 [Pipeline] { 00:00:05.772 [Pipeline] stage 00:00:05.774 [Pipeline] { (Prologue) 00:00:06.016 [Pipeline] sh 00:00:06.874 + logger -p user.info -t JENKINS-CI 00:00:06.909 [Pipeline] echo 00:00:06.910 Node: CYP12 00:00:06.919 [Pipeline] sh 00:00:07.270 [Pipeline] setCustomBuildProperty 00:00:07.280 [Pipeline] echo 00:00:07.281 Cleanup processes 00:00:07.285 [Pipeline] sh 00:00:07.578 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.578 5793 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.594 [Pipeline] sh 00:00:07.894 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.894 ++ grep -v 'sudo pgrep' 00:00:07.894 ++ awk '{print $1}' 00:00:07.894 + sudo kill -9 00:00:07.894 + true 00:00:07.911 [Pipeline] cleanWs 00:00:07.921 [WS-CLEANUP] Deleting project workspace... 00:00:07.921 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.935 [WS-CLEANUP] done 00:00:07.939 [Pipeline] setCustomBuildProperty 00:00:07.951 [Pipeline] sh 00:00:08.250 + sudo git config --global --replace-all safe.directory '*' 00:00:08.355 [Pipeline] httpRequest 00:00:10.254 [Pipeline] echo 00:00:10.256 Sorcerer 10.211.164.101 is alive 00:00:10.309 [Pipeline] retry 00:00:10.312 [Pipeline] { 00:00:10.328 [Pipeline] httpRequest 00:00:10.346 HttpMethod: GET 00:00:10.346 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.356 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.386 Response Code: HTTP/1.1 200 OK 00:00:10.387 Success: Status code 200 is in the accepted range: 200,404 00:00:10.387 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:26.787 [Pipeline] } 00:00:26.804 [Pipeline] // retry 00:00:26.811 [Pipeline] sh 00:00:27.108 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:27.126 [Pipeline] httpRequest 00:00:27.735 [Pipeline] echo 00:00:27.736 Sorcerer 10.211.164.101 is alive 00:00:27.744 [Pipeline] retry 00:00:27.746 [Pipeline] { 00:00:27.759 [Pipeline] httpRequest 00:00:27.764 HttpMethod: GET 00:00:27.764 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:27.765 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:27.786 Response Code: HTTP/1.1 200 OK 00:00:27.786 Success: Status code 200 is in the accepted range: 200,404 00:00:27.786 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:24.755 [Pipeline] } 00:01:24.772 [Pipeline] // retry 00:01:24.779 [Pipeline] sh 00:01:25.089 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:27.665 [Pipeline] sh 00:01:27.963 + git -C spdk log --oneline -n5 00:01:27.963 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:27.963 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:27.963 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:27.963 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:27.963 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:27.985 [Pipeline] withCredentials 00:01:27.999 > git --version # timeout=10 00:01:28.013 > git --version # 'git version 2.39.2' 00:01:28.049 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:28.051 [Pipeline] { 00:01:28.062 [Pipeline] retry 00:01:28.063 [Pipeline] { 00:01:28.078 [Pipeline] sh 00:01:28.636 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:28.913 [Pipeline] } 00:01:28.930 [Pipeline] // retry 00:01:28.935 [Pipeline] } 00:01:28.950 [Pipeline] // withCredentials 00:01:28.959 [Pipeline] httpRequest 00:01:29.362 [Pipeline] echo 00:01:29.364 Sorcerer 10.211.164.101 is alive 00:01:29.373 [Pipeline] retry 00:01:29.375 [Pipeline] { 00:01:29.388 [Pipeline] httpRequest 00:01:29.394 HttpMethod: GET 00:01:29.394 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.395 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.413 Response Code: HTTP/1.1 200 OK 00:01:29.413 Success: Status code 200 is in the accepted range: 200,404 00:01:29.413 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:41.816 [Pipeline] } 00:01:41.838 [Pipeline] // retry 00:01:41.845 [Pipeline] sh 00:01:42.148 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:44.091 [Pipeline] sh 00:01:44.388 + git -C dpdk log --oneline -n5 00:01:44.388 eeb0605f11 version: 23.11.0 00:01:44.388 238778122a doc: update release notes for 23.11 00:01:44.388 46aa6b3cfc doc: fix description of RSS features 00:01:44.388 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.388 7e421ae345 devtools: support skipping forbid rule check 00:01:44.401 [Pipeline] } 00:01:44.416 [Pipeline] // stage 00:01:44.427 [Pipeline] stage 00:01:44.429 [Pipeline] { (Prepare) 00:01:44.450 [Pipeline] writeFile 00:01:44.466 [Pipeline] sh 00:01:44.769 + logger -p user.info -t JENKINS-CI 00:01:44.786 [Pipeline] sh 00:01:45.085 + logger -p user.info -t JENKINS-CI 00:01:45.100 [Pipeline] sh 00:01:45.394 + cat autorun-spdk.conf 00:01:45.394 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.394 SPDK_TEST_NVMF=1 00:01:45.394 SPDK_TEST_NVME_CLI=1 00:01:45.394 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.394 SPDK_TEST_NVMF_NICS=e810 00:01:45.394 SPDK_TEST_VFIOUSER=1 00:01:45.394 SPDK_RUN_UBSAN=1 00:01:45.394 NET_TYPE=phy 00:01:45.394 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.394 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.405 RUN_NIGHTLY=1 00:01:45.410 [Pipeline] readFile 00:01:45.453 [Pipeline] withEnv 00:01:45.455 [Pipeline] { 00:01:45.469 [Pipeline] sh 00:01:45.769 + set -ex 00:01:45.769 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:45.769 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:45.769 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.769 ++ SPDK_TEST_NVMF=1 00:01:45.769 ++ SPDK_TEST_NVME_CLI=1 00:01:45.769 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.769 ++ SPDK_TEST_NVMF_NICS=e810 00:01:45.769 ++ SPDK_TEST_VFIOUSER=1 00:01:45.769 ++ SPDK_RUN_UBSAN=1 00:01:45.769 ++ NET_TYPE=phy 00:01:45.769 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.769 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.769 ++ RUN_NIGHTLY=1 00:01:45.769 + case $SPDK_TEST_NVMF_NICS in 00:01:45.769 + DRIVERS=ice 00:01:45.769 + [[ tcp == \r\d\m\a ]] 00:01:45.769 + [[ -n ice ]] 00:01:45.769 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.769 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:53.935 rmmod: ERROR: Module irdma is not currently loaded 00:01:53.935 rmmod: ERROR: Module i40iw is not currently loaded 00:01:53.935 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:53.935 + true 00:01:53.935 + for D in $DRIVERS 00:01:53.935 + sudo modprobe ice 00:01:53.935 + exit 0 00:01:53.948 [Pipeline] } 00:01:53.962 [Pipeline] // withEnv 00:01:53.968 [Pipeline] } 00:01:53.982 [Pipeline] // stage 00:01:53.990 [Pipeline] catchError 00:01:53.992 [Pipeline] { 00:01:54.008 [Pipeline] timeout 00:01:54.009 Timeout set to expire in 1 hr 0 min 00:01:54.011 [Pipeline] { 00:01:54.026 [Pipeline] stage 00:01:54.028 [Pipeline] { (Tests) 00:01:54.042 [Pipeline] sh 00:01:54.340 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.340 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.340 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.340 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:54.340 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.340 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:54.340 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:54.340 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:54.340 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:54.340 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:54.340 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:54.340 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.340 + source /etc/os-release 00:01:54.340 ++ NAME='Fedora Linux' 00:01:54.340 ++ VERSION='39 (Cloud Edition)' 00:01:54.340 ++ ID=fedora 00:01:54.340 ++ VERSION_ID=39 00:01:54.340 ++ VERSION_CODENAME= 00:01:54.340 ++ PLATFORM_ID=platform:f39 00:01:54.340 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:54.340 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:54.340 ++ LOGO=fedora-logo-icon 00:01:54.340 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:54.340 ++ HOME_URL=https://fedoraproject.org/ 00:01:54.340 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:54.340 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:54.340 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:54.340 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:54.340 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:54.340 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:54.340 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:54.340 ++ SUPPORT_END=2024-11-12 00:01:54.340 ++ VARIANT='Cloud Edition' 00:01:54.340 ++ VARIANT_ID=cloud 00:01:54.340 + uname -a 00:01:54.340 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:54.340 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:57.656 Hugepages 00:01:57.656 node hugesize free / total 00:01:57.656 node0 1048576kB 0 / 0 00:01:57.656 node0 2048kB 0 / 0 00:01:57.656 node1 1048576kB 0 / 0 00:01:57.656 node1 2048kB 0 / 0 00:01:57.656 00:01:57.656 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.656 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:57.656 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:57.656 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:57.656 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:57.656 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:57.656 + rm -f /tmp/spdk-ld-path 00:01:57.656 + source autorun-spdk.conf 00:01:57.656 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.656 ++ SPDK_TEST_NVMF=1 00:01:57.656 ++ SPDK_TEST_NVME_CLI=1 00:01:57.656 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.656 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.656 ++ SPDK_TEST_VFIOUSER=1 00:01:57.656 ++ SPDK_RUN_UBSAN=1 00:01:57.656 ++ NET_TYPE=phy 00:01:57.656 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.656 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.656 ++ RUN_NIGHTLY=1 00:01:57.656 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.656 + [[ -n '' ]] 00:01:57.656 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.656 + for M in /var/spdk/build-*-manifest.txt 00:01:57.656 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:57.656 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.656 + for M in /var/spdk/build-*-manifest.txt 00:01:57.656 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.656 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.656 + for M in /var/spdk/build-*-manifest.txt 00:01:57.656 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.656 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.656 ++ uname 00:01:57.656 + [[ Linux == \L\i\n\u\x ]] 00:01:57.656 + sudo dmesg -T 00:01:57.656 + sudo dmesg --clear 00:01:57.656 + dmesg_pid=6829 00:01:57.656 + [[ Fedora Linux == FreeBSD ]] 00:01:57.656 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.656 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.656 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.656 + sudo dmesg -Tw 00:01:57.656 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.656 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.656 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.656 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.656 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.656 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.656 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.656 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.656 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.656 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.656 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.656 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.656 Test configuration: 00:01:57.656 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.656 SPDK_TEST_NVMF=1 00:01:57.656 SPDK_TEST_NVME_CLI=1 00:01:57.656 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.656 SPDK_TEST_NVMF_NICS=e810 00:01:57.656 SPDK_TEST_VFIOUSER=1 00:01:57.656 SPDK_RUN_UBSAN=1 00:01:57.656 NET_TYPE=phy 00:01:57.656 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.656 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.656 RUN_NIGHTLY=1 15:20:38 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:57.656 15:20:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.656 15:20:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:57.656 15:20:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.656 15:20:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.656 15:20:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.656 15:20:38 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.656 15:20:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.656 15:20:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.656 15:20:38 -- paths/export.sh@5 -- $ export PATH 00:01:57.656 15:20:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.921 15:20:38 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.921 15:20:38 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:57.921 15:20:38 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727443238.XXXXXX 00:01:57.921 15:20:38 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727443238.1qrlJt 00:01:57.921 15:20:38 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:57.921 15:20:38 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:01:57.921 15:20:38 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.921 15:20:38 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:57.921 15:20:38 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:57.921 15:20:38 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.921 15:20:38 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:57.921 15:20:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:57.921 15:20:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.921 15:20:38 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 
00:01:57.921 15:20:38 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:57.921 15:20:38 -- pm/common@17 -- $ local monitor 00:01:57.921 15:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.921 15:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.921 15:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.921 15:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.921 15:20:38 -- pm/common@21 -- $ date +%s 00:01:57.921 15:20:38 -- pm/common@25 -- $ sleep 1 00:01:57.921 15:20:38 -- pm/common@21 -- $ date +%s 00:01:57.921 15:20:38 -- pm/common@21 -- $ date +%s 00:01:57.921 15:20:38 -- pm/common@21 -- $ date +%s 00:01:57.921 15:20:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727443238 00:01:57.921 15:20:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727443238 00:01:57.921 15:20:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727443238 00:01:57.921 15:20:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727443238 00:01:57.921 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727443238_collect-vmstat.pm.log 00:01:57.921 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727443238_collect-cpu-load.pm.log 00:01:57.921 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727443238_collect-cpu-temp.pm.log 00:01:57.921 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727443238_collect-bmc-pm.bmc.pm.log 00:01:58.871 15:20:39 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:58.871 15:20:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.871 15:20:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.871 15:20:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.871 15:20:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.871 Fri Sep 27 01:20:39 PM UTC 2024 00:01:58.871 15:20:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.871 v25.01-pre-17-g09cc66129 00:01:58.871 15:20:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.871 15:20:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.871 15:20:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.871 15:20:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:58.871 15:20:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.871 15:20:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.871 ************************************ 00:01:58.871 START TEST ubsan 00:01:58.871 ************************************ 00:01:58.871 15:20:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:58.871 using ubsan 00:01:58.871 00:01:58.871 real 0m0.001s 00:01:58.871 user 
0m0.000s 00:01:58.871 sys 0m0.000s 00:01:58.871 15:20:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:58.871 15:20:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.871 ************************************ 00:01:58.871 END TEST ubsan 00:01:58.871 ************************************ 00:01:58.871 15:20:39 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:58.871 15:20:39 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:58.871 15:20:39 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:58.871 15:20:39 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:58.871 15:20:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.871 15:20:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.872 ************************************ 00:01:58.872 START TEST build_native_dpdk 00:01:58.872 ************************************ 00:01:58.872 15:20:39 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:58.872 15:20:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:59.135 eeb0605f11 version: 23.11.0 00:01:59.135 238778122a doc: update release notes for 23.11 00:01:59.135 46aa6b3cfc doc: fix description of RSS features 00:01:59.135 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:59.135 7e421ae345 devtools: support skipping forbid rule check 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:59.135 15:20:39 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:59.135 15:20:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:59.135 15:20:39 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:59.136 patching file config/rte_config.h 00:01:59.136 Hunk #1 succeeded at 60 (offset 1 line). 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:59.136 patching file lib/pcapng/rte_pcapng.c 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:59.136 15:20:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:59.136 15:20:39 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:05.815 The Meson build system 00:02:05.815 Version: 1.5.0 00:02:05.815 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:05.815 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:05.815 Build type: native build 00:02:05.815 Program cat found: YES (/usr/bin/cat) 00:02:05.815 Project name: DPDK 00:02:05.815 Project version: 23.11.0 00:02:05.815 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.815 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:05.815 Host machine cpu family: x86_64 00:02:05.815 Host machine cpu: x86_64 00:02:05.815 Message: ## Building in Developer Mode ## 00:02:05.815 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.815 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:05.815 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.815 Program python3 found: YES (/usr/bin/python3) 00:02:05.815 Program cat found: YES (/usr/bin/cat) 00:02:05.815 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:05.815 Compiler for C supports arguments -march=native: YES 00:02:05.815 Checking for size of "void *" : 8 00:02:05.815 Checking for size of "void *" : 8 (cached) 00:02:05.815 Library m found: YES 00:02:05.815 Library numa found: YES 00:02:05.815 Has header "numaif.h" : YES 00:02:05.815 Library fdt found: NO 00:02:05.815 Library execinfo found: NO 00:02:05.815 Has header "execinfo.h" : YES 00:02:05.815 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.815 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.815 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.815 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.815 Run-time dependency openssl found: YES 3.1.1 00:02:05.815 Run-time dependency libpcap found: YES 1.10.4 00:02:05.815 Has header "pcap.h" with dependency libpcap: YES 00:02:05.815 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.815 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.815 Compiler for C supports arguments -Wformat: YES 00:02:05.815 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.815 Compiler for C supports arguments -Wformat-security: NO 00:02:05.815 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.815 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.815 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.815 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.815 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.815 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.815 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.815 Compiler for C supports arguments -Wundef: YES 00:02:05.815 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.815 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.815 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.815 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.815 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.815 Program objdump found: YES (/usr/bin/objdump) 00:02:05.815 Compiler for C supports arguments -mavx512f: YES 00:02:05.815 Checking if "AVX512 checking" compiles: YES 00:02:05.815 Fetching value of define "__SSE4_2__" : 1 00:02:05.815 Fetching value of define "__AES__" : 1 00:02:05.815 Fetching value of define "__AVX__" : 1 00:02:05.815 Fetching value of define "__AVX2__" : 1 00:02:05.815 Fetching value of define "__AVX512BW__" : 1 00:02:05.815 Fetching value of define "__AVX512CD__" : 1 00:02:05.815 Fetching value of define "__AVX512DQ__" : 1 00:02:05.815 Fetching value of define "__AVX512F__" : 1 00:02:05.815 Fetching value of define "__AVX512VL__" : 1 00:02:05.815 Fetching value of define "__PCLMUL__" : 1 00:02:05.815 Fetching value of define "__RDRND__" : 1 00:02:05.815 Fetching value of define "__RDSEED__" : 1 00:02:05.815 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:05.815 Fetching value of define "__znver1__" : (undefined) 00:02:05.815 Fetching value of define "__znver2__" : (undefined) 00:02:05.815 Fetching value of define "__znver3__" : (undefined) 00:02:05.815 Fetching value of define "__znver4__" : (undefined) 00:02:05.815 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.815 Message: lib/log: Defining dependency "log" 00:02:05.815 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.815 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:05.815 Checking for function "getentropy" : NO 00:02:05.815 Message: lib/eal: Defining dependency "eal" 00:02:05.815 Message: lib/ring: Defining dependency "ring" 00:02:05.815 Message: lib/rcu: Defining dependency "rcu" 00:02:05.815 Message: lib/mempool: Defining dependency "mempool" 00:02:05.815 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.815 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.815 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:05.815 Compiler for C supports arguments -mpclmul: YES 00:02:05.815 Compiler for C supports arguments -maes: YES 00:02:05.815 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.815 Compiler for C supports arguments -mavx512bw: YES 00:02:05.815 Compiler for C supports arguments -mavx512dq: YES 00:02:05.815 Compiler for C supports arguments -mavx512vl: YES 00:02:05.815 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.815 Compiler for C supports arguments -mavx2: YES 00:02:05.815 Compiler for C supports arguments -mavx: YES 00:02:05.815 Message: lib/net: Defining dependency "net" 00:02:05.815 Message: lib/meter: Defining dependency "meter" 00:02:05.815 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.815 Message: lib/pci: Defining dependency "pci" 00:02:05.815 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.815 Message: lib/metrics: Defining dependency "metrics" 00:02:05.815 Message: lib/hash: Defining dependency "hash" 00:02:05.815 Message: lib/timer: Defining dependency "timer" 00:02:05.815 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.815 Message: lib/acl: Defining dependency "acl" 00:02:05.815 Message: lib/bbdev: Defining dependency "bbdev" 00:02:05.815 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:05.815 Run-time dependency libelf found: YES 0.191 00:02:05.815 Message: lib/bpf: Defining dependency "bpf" 00:02:05.815 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:05.815 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.815 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.815 Message: lib/distributor: Defining dependency "distributor" 00:02:05.815 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.815 Message: lib/efd: Defining dependency "efd" 00:02:05.815 Message: lib/eventdev: Defining dependency "eventdev" 00:02:05.815 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:05.815 Message: lib/gpudev: Defining dependency "gpudev" 00:02:05.815 Message: lib/gro: Defining dependency "gro" 00:02:05.815 Message: lib/gso: Defining dependency "gso" 00:02:05.815 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:05.815 Message: lib/jobstats: Defining dependency "jobstats" 00:02:05.815 Message: lib/latencystats: Defining dependency "latencystats" 00:02:05.815 Message: lib/lpm: Defining dependency "lpm" 00:02:05.815 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.815 Fetching value of define "__AVX512IFMA__" : 1 00:02:05.815 Message: 
lib/member: Defining dependency "member" 00:02:05.815 Message: lib/pcapng: Defining dependency "pcapng" 00:02:05.815 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.815 Message: lib/power: Defining dependency "power" 00:02:05.815 Message: lib/rawdev: Defining dependency "rawdev" 00:02:05.815 Message: lib/regexdev: Defining dependency "regexdev" 00:02:05.815 Message: lib/mldev: Defining dependency "mldev" 00:02:05.815 Message: lib/rib: Defining dependency "rib" 00:02:05.815 Message: lib/reorder: Defining dependency "reorder" 00:02:05.815 Message: lib/sched: Defining dependency "sched" 00:02:05.815 Message: lib/security: Defining dependency "security" 00:02:05.815 Message: lib/stack: Defining dependency "stack" 00:02:05.815 Has header "linux/userfaultfd.h" : YES 00:02:05.816 Has header "linux/vduse.h" : YES 00:02:05.816 Message: lib/vhost: Defining dependency "vhost" 00:02:05.816 Message: lib/ipsec: Defining dependency "ipsec" 00:02:05.816 Message: lib/pdcp: Defining dependency "pdcp" 00:02:05.816 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.816 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.816 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.816 Message: lib/fib: Defining dependency "fib" 00:02:05.816 Message: lib/port: Defining dependency "port" 00:02:05.816 Message: lib/pdump: Defining dependency "pdump" 00:02:05.816 Message: lib/table: Defining dependency "table" 00:02:05.816 Message: lib/pipeline: Defining dependency "pipeline" 00:02:05.816 Message: lib/graph: Defining dependency "graph" 00:02:05.816 Message: lib/node: Defining dependency "node" 00:02:05.816 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.816 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.816 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.208 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.208 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:07.208 Compiler for C supports arguments -Wno-unused-value: YES 00:02:07.208 Compiler for C supports arguments -Wno-format: YES 00:02:07.208 Compiler for C supports arguments -Wno-format-security: YES 00:02:07.208 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:07.208 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:07.208 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:07.208 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:07.208 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.208 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.208 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.208 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:07.208 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:07.208 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:07.208 Has header "sys/epoll.h" : YES 00:02:07.208 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.208 Configuring doxy-api-html.conf using configuration 00:02:07.208 Configuring doxy-api-man.conf using configuration 00:02:07.208 Program mandb found: YES (/usr/bin/mandb) 00:02:07.208 Program sphinx-build found: NO 00:02:07.208 Configuring rte_build_config.h using configuration 00:02:07.208 Message: 00:02:07.208 ================= 00:02:07.208 Applications Enabled 00:02:07.208 ================= 00:02:07.208 00:02:07.208 apps: 00:02:07.208 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:02:07.208 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:07.208 test-pmd, test-regex, test-sad, test-security-perf, 00:02:07.208 00:02:07.208 Message: 00:02:07.208 ================= 00:02:07.208 Libraries Enabled 00:02:07.208 ================= 00:02:07.208 00:02:07.208 libs: 00:02:07.208 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.208 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:07.208 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:07.208 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:07.208 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:07.208 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:07.208 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:07.208 00:02:07.208 00:02:07.208 Message: 00:02:07.208 =============== 00:02:07.208 Drivers Enabled 00:02:07.208 =============== 00:02:07.208 00:02:07.208 common: 00:02:07.208 00:02:07.208 bus: 00:02:07.208 pci, vdev, 00:02:07.208 mempool: 00:02:07.208 ring, 00:02:07.208 dma: 00:02:07.208 00:02:07.208 net: 00:02:07.208 i40e, 00:02:07.208 raw: 00:02:07.208 00:02:07.208 crypto: 00:02:07.208 00:02:07.208 compress: 00:02:07.208 00:02:07.208 regex: 00:02:07.208 00:02:07.208 ml: 00:02:07.208 00:02:07.208 vdpa: 00:02:07.208 00:02:07.208 event: 00:02:07.208 00:02:07.208 baseband: 00:02:07.208 00:02:07.208 gpu: 00:02:07.208 00:02:07.208 00:02:07.208 Message: 00:02:07.208 ================= 00:02:07.208 Content Skipped 00:02:07.208 ================= 00:02:07.208 00:02:07.208 apps: 00:02:07.208 00:02:07.208 libs: 00:02:07.208 00:02:07.208 drivers: 00:02:07.208 common/cpt: not in enabled drivers build config 00:02:07.208 common/dpaax: not in enabled drivers build config 00:02:07.208 common/iavf: not in enabled drivers build config 00:02:07.208 common/idpf: not in enabled drivers build config 00:02:07.208 common/mvep: not in enabled drivers build config 00:02:07.208 common/octeontx: not in enabled drivers build config 00:02:07.208 bus/auxiliary: not in enabled drivers build config 00:02:07.208 bus/cdx: not in enabled drivers build config 00:02:07.208 bus/dpaa: not in enabled drivers build config 00:02:07.208 bus/fslmc: not in enabled drivers build config 00:02:07.208 bus/ifpga: not in enabled drivers build config 00:02:07.208 bus/platform: not in enabled drivers build config 00:02:07.208 bus/vmbus: not in enabled drivers build config 00:02:07.208 common/cnxk: not in enabled drivers build config 00:02:07.208 common/mlx5: not in enabled drivers build config 00:02:07.208 common/nfp: not in enabled drivers build config 00:02:07.208 common/qat: not in enabled drivers build config 00:02:07.208 common/sfc_efx: not in enabled drivers build config 00:02:07.208 mempool/bucket: not in enabled drivers build config 00:02:07.208 mempool/cnxk: not in enabled drivers build config 00:02:07.208 mempool/dpaa: not in enabled drivers build config 00:02:07.208 mempool/dpaa2: not in enabled drivers build config 00:02:07.208 mempool/octeontx: not in enabled drivers build config 00:02:07.208 mempool/stack: not in enabled drivers build config 00:02:07.208 dma/cnxk: not in enabled drivers build config 00:02:07.208 dma/dpaa: not in enabled drivers build config 00:02:07.208 dma/dpaa2: not in enabled drivers build config 00:02:07.208 dma/hisilicon: not in enabled drivers build config 00:02:07.208 dma/idxd: not in enabled drivers build 
config 00:02:07.208 dma/ioat: not in enabled drivers build config 00:02:07.208 dma/skeleton: not in enabled drivers build config 00:02:07.208 net/af_packet: not in enabled drivers build config 00:02:07.208 net/af_xdp: not in enabled drivers build config 00:02:07.208 net/ark: not in enabled drivers build config 00:02:07.208 net/atlantic: not in enabled drivers build config 00:02:07.208 net/avp: not in enabled drivers build config 00:02:07.208 net/axgbe: not in enabled drivers build config 00:02:07.208 net/bnx2x: not in enabled drivers build config 00:02:07.208 net/bnxt: not in enabled drivers build config 00:02:07.208 net/bonding: not in enabled drivers build config 00:02:07.208 net/cnxk: not in enabled drivers build config 00:02:07.208 net/cpfl: not in enabled drivers build config 00:02:07.208 net/cxgbe: not in enabled drivers build config 00:02:07.208 net/dpaa: not in enabled drivers build config 00:02:07.208 net/dpaa2: not in enabled drivers build config 00:02:07.208 net/e1000: not in enabled drivers build config 00:02:07.208 net/ena: not in enabled drivers build config 00:02:07.208 net/enetc: not in enabled drivers build config 00:02:07.208 net/enetfec: not in enabled drivers build config 00:02:07.208 net/enic: not in enabled drivers build config 00:02:07.208 net/failsafe: not in enabled drivers build config 00:02:07.208 net/fm10k: not in enabled drivers build config 00:02:07.208 net/gve: not in enabled drivers build config 00:02:07.208 net/hinic: not in enabled drivers build config 00:02:07.208 net/hns3: not in enabled drivers build config 00:02:07.208 net/iavf: not in enabled drivers build config 00:02:07.208 net/ice: not in enabled drivers build config 00:02:07.208 net/idpf: not in enabled drivers build config 00:02:07.208 net/igc: not in enabled drivers build config 00:02:07.208 net/ionic: not in enabled drivers build config 00:02:07.208 net/ipn3ke: not in enabled drivers build config 00:02:07.208 net/ixgbe: not in enabled drivers build config 00:02:07.208 net/mana: not in enabled drivers build config 00:02:07.208 net/memif: not in enabled drivers build config 00:02:07.208 net/mlx4: not in enabled drivers build config 00:02:07.208 net/mlx5: not in enabled drivers build config 00:02:07.208 net/mvneta: not in enabled drivers build config 00:02:07.208 net/mvpp2: not in enabled drivers build config 00:02:07.208 net/netvsc: not in enabled drivers build config 00:02:07.208 net/nfb: not in enabled drivers build config 00:02:07.208 net/nfp: not in enabled drivers build config 00:02:07.208 net/ngbe: not in enabled drivers build config 00:02:07.208 net/null: not in enabled drivers build config 00:02:07.208 net/octeontx: not in enabled drivers build config 00:02:07.208 net/octeon_ep: not in enabled drivers build config 00:02:07.208 net/pcap: not in enabled drivers build config 00:02:07.208 net/pfe: not in enabled drivers build config 00:02:07.208 net/qede: not in enabled drivers build config 00:02:07.208 net/ring: not in enabled drivers build config 00:02:07.208 net/sfc: not in enabled drivers build config 00:02:07.208 net/softnic: not in enabled drivers build config 00:02:07.208 net/tap: not in enabled drivers build config 00:02:07.208 net/thunderx: not in enabled drivers build config 00:02:07.208 net/txgbe: not in enabled drivers build config 00:02:07.208 net/vdev_netvsc: not in enabled drivers build config 00:02:07.208 net/vhost: not in enabled drivers build config 00:02:07.208 net/virtio: not in enabled drivers build config 00:02:07.208 net/vmxnet3: not in enabled drivers build config 
00:02:07.208 raw/cnxk_bphy: not in enabled drivers build config
00:02:07.208 raw/cnxk_gpio: not in enabled drivers build config
00:02:07.208 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:07.208 raw/ifpga: not in enabled drivers build config
00:02:07.208 raw/ntb: not in enabled drivers build config
00:02:07.208 raw/skeleton: not in enabled drivers build config
00:02:07.208 crypto/armv8: not in enabled drivers build config
00:02:07.208 crypto/bcmfs: not in enabled drivers build config
00:02:07.208 crypto/caam_jr: not in enabled drivers build config
00:02:07.208 crypto/ccp: not in enabled drivers build config
00:02:07.208 crypto/cnxk: not in enabled drivers build config
00:02:07.208 crypto/dpaa_sec: not in enabled drivers build config
00:02:07.208 crypto/dpaa2_sec: not in enabled drivers build config
00:02:07.208 crypto/ipsec_mb: not in enabled drivers build config
00:02:07.208 crypto/mlx5: not in enabled drivers build config
00:02:07.208 crypto/mvsam: not in enabled drivers build config
00:02:07.208 crypto/nitrox: not in enabled drivers build config
00:02:07.208 crypto/null: not in enabled drivers build config
00:02:07.208 crypto/octeontx: not in enabled drivers build config
00:02:07.208 crypto/openssl: not in enabled drivers build config
00:02:07.208 crypto/scheduler: not in enabled drivers build config
00:02:07.208 crypto/uadk: not in enabled drivers build config
00:02:07.208 crypto/virtio: not in enabled drivers build config
00:02:07.208 compress/isal: not in enabled drivers build config
00:02:07.208 compress/mlx5: not in enabled drivers build config
00:02:07.208 compress/octeontx: not in enabled drivers build config
00:02:07.208 compress/zlib: not in enabled drivers build config
00:02:07.208 regex/mlx5: not in enabled drivers build config
00:02:07.208 regex/cn9k: not in enabled drivers build config
00:02:07.208 ml/cnxk: not in enabled drivers build config
00:02:07.208 vdpa/ifc: not in enabled drivers build config
00:02:07.208 vdpa/mlx5: not in enabled drivers build config
00:02:07.208 vdpa/nfp: not in enabled drivers build config
00:02:07.208 vdpa/sfc: not in enabled drivers build config
00:02:07.208 event/cnxk: not in enabled drivers build config
00:02:07.208 event/dlb2: not in enabled drivers build config
00:02:07.208 event/dpaa: not in enabled drivers build config
00:02:07.208 event/dpaa2: not in enabled drivers build config
00:02:07.209 event/dsw: not in enabled drivers build config
00:02:07.209 event/opdl: not in enabled drivers build config
00:02:07.209 event/skeleton: not in enabled drivers build config
00:02:07.209 event/sw: not in enabled drivers build config
00:02:07.209 event/octeontx: not in enabled drivers build config
00:02:07.209 baseband/acc: not in enabled drivers build config
00:02:07.209 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:07.209 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:07.209 baseband/la12xx: not in enabled drivers build config
00:02:07.209 baseband/null: not in enabled drivers build config
00:02:07.209 baseband/turbo_sw: not in enabled drivers build config
00:02:07.209 gpu/cuda: not in enabled drivers build config
00:02:07.209
00:02:07.209
00:02:07.209 Build targets in project: 215
00:02:07.209
00:02:07.209 DPDK 23.11.0
00:02:07.209
00:02:07.209 User defined options
00:02:07.209 libdir : lib
00:02:07.209 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:07.209 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:07.209 c_link_args :
00:02:07.209 enable_docs : false
00:02:07.209 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:07.209 enable_kmods : false
00:02:07.209 machine : native
00:02:07.209 tests : false
00:02:07.209
00:02:07.209 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:07.209 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:07.480 15:20:47 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144
00:02:07.480 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:07.817 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:07.817 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:07.817 [3/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:07.817 [4/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:07.817 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:07.817 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:07.817 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:07.817 [8/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:08.122 [9/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:08.122 [10/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:08.122 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:08.122 [12/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:08.122 [13/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:08.122 [14/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:08.122 [15/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:08.122 [16/705] Linking static target lib/librte_kvargs.a
00:02:08.122 [17/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:08.122 [18/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:08.122 [19/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:08.122 [20/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:08.122 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:08.122 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:08.122 [23/705] Linking static target lib/librte_log.a
00:02:08.395 [24/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:08.395 [25/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:08.395 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:08.395 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:08.395 [28/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:08.395 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:08.395 [30/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:08.395 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:08.395 [32/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:08.395 [33/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:08.395 [34/705] Linking static target lib/librte_pci.a
00:02:08.395 [35/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:08.395 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:08.395 [37/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:08.395 [38/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:08.395 [39/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:08.395 [40/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:08.395 [41/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:08.395 [42/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:08.395 [43/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:08.395 [44/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:08.395 [45/705] Linking static target lib/librte_ring.a
00:02:08.656 [46/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:08.656 [47/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:08.656 [48/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:08.656 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:08.656 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:08.656 [51/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:08.656 [52/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:08.656 [53/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:08.656 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:08.656 [55/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:08.656 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:08.656 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:08.656 [58/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.656 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:08.656 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:08.656 [61/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:08.656 [62/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:08.656 [63/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:08.656 [64/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:08.656 [65/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:08.656 [66/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:08.656 [67/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:08.656 [68/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:08.656 [69/705] Linking static target lib/librte_meter.a
00:02:08.656 [70/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:08.656 [71/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:08.656 [72/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:08.656 [73/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:08.656 [74/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:08.656 [75/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:08.656 [76/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.656 [77/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:08.656 [78/705] Linking static target lib/librte_cfgfile.a
00:02:08.656 [79/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:08.656 [80/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:08.918 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:08.918 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:08.918 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:08.918 [84/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:08.918 [85/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:08.918 [86/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:08.918 [87/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:08.918 [88/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:08.918 [89/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:08.918 [90/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:08.918 [91/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:08.918 [92/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:08.918 [93/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:08.918 [94/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:08.918 [95/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:08.918 [96/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:08.918 [97/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:08.918 [98/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:08.918 [99/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.918 [100/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:08.918 [101/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:08.918 [102/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:08.918 [103/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:08.918 [104/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:08.918 [105/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:08.918 [106/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:08.918 [107/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:08.918 [108/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:08.918 [109/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:08.918 [110/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:08.918 [111/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.918 [112/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:08.918 [113/705] Linking static target lib/librte_cmdline.a
00:02:08.918 [114/705] Linking static target lib/librte_timer.a
00:02:08.918 [115/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:08.918 [116/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:08.918 [117/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:08.918 [118/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:08.918 [119/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:08.918 [120/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:09.182 [121/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:09.182 [122/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:09.182 [123/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:09.182 [124/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:09.182 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:09.182 [126/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:09.182 [127/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:09.182 [128/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:09.182 [129/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:09.182 [130/705] Linking static target lib/librte_compressdev.a
00:02:09.182 [131/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:09.182 [132/705] Linking target lib/librte_log.so.24.0
00:02:09.182 [133/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:09.182 [134/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:09.182 [135/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.182 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:09.182 [137/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:09.182 [138/705] Linking static target lib/librte_net.a
00:02:09.182 [139/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:09.182 [140/705] Linking static target lib/librte_mempool.a
00:02:09.182 [141/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:09.182 [142/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:09.182 [143/705] Linking static target lib/librte_bitratestats.a
00:02:09.182 [144/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:09.182 [145/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:09.182 [146/705] Linking static target lib/librte_metrics.a
00:02:09.182 [147/705] Linking static target lib/librte_bbdev.a
00:02:09.182 [148/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:09.182 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:09.182 [150/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:09.182 [151/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:09.182 [152/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:09.182 [153/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:09.182 [154/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:09.182 [155/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:09.182 [156/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:09.182 [157/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:09.182 [158/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:09.182 [159/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:09.182 [160/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:09.182 [161/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:09.182 [162/705] Linking static target lib/librte_jobstats.a
00:02:09.182 [163/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:09.182 [164/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:09.451 [165/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.451 [166/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:09.451 [167/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:09.451 [168/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:09.451 [169/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:09.451 [170/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:09.451 [171/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:09.451 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:09.451 [173/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:09.451 [174/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:09.451 [175/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:09.451 [176/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:09.451 [177/705] Linking static target lib/librte_latencystats.a
00:02:09.451 [178/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:09.451 [179/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:09.451 [180/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:09.451 [181/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:09.451 [182/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:09.451 [183/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:09.451 [184/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:09.451 [185/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:09.451 [186/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:09.451 [187/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:09.451 [188/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:09.451 [189/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:09.451 [190/705] Linking static target lib/librte_telemetry.a
00:02:09.451 [191/705] Linking static target lib/librte_gso.a
00:02:09.451 [192/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:09.451 [193/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:09.451 [194/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:09.451 [195/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:09.451 [196/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:09.451 [197/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:09.451 [198/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:09.451 [199/705] Linking static target lib/librte_dispatcher.a
00:02:09.451 [200/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:09.451 [201/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:09.451 [202/705] Linking static target lib/librte_gpudev.a
00:02:09.451 [203/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:09.451 [204/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:09.451 [205/705] Linking static target lib/librte_gro.a
00:02:09.451 [206/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:09.451 [207/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:09.451 [208/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:09.451 [209/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:09.451 [210/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.451 [211/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.451 [212/705] Linking static target lib/librte_dmadev.a
00:02:09.451 [213/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:09.451 [214/705] Linking static target lib/librte_regexdev.a
00:02:09.451 [215/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:09.451 [216/705] Linking static target lib/librte_rawdev.a
00:02:09.715 [217/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:09.715 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:09.715 [219/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:09.715 [220/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:09.715 [221/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:09.715 [222/705] Linking static target lib/librte_distributor.a
00:02:09.715 [223/705] Linking static target lib/librte_stack.a
00:02:09.715 [224/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:09.715 [225/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:09.715 [226/705] Linking static target lib/librte_mbuf.a
00:02:09.715 [227/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:09.715 [228/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:09.715 [229/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:09.715 [230/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:09.715 [231/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:09.715 [232/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.715 [233/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:09.715 [234/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:09.715 [235/705] Linking static target lib/librte_rcu.a
00:02:09.715 [236/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:09.715 [237/705] Linking static target lib/librte_pcapng.a
00:02:09.715 [238/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:09.715 [239/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.715 [240/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:09.715 [241/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:09.715 [242/705] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:09.715 [243/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:09.715 [244/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:09.715 [245/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:09.715 [246/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:09.715 [247/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.715 [248/705] Linking static target lib/librte_power.a
00:02:09.715 [249/705] Linking static target lib/librte_ip_frag.a
00:02:09.715 [250/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:09.715 [251/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:09.981 [252/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:09.981 [253/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:09.981 [254/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:09.981 [255/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:09.981 [256/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:09.981 [257/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:09.981 [258/705] Linking static target lib/librte_mldev.a
00:02:09.981 [259/705] Linking static target lib/librte_bpf.a
00:02:09.981 [260/705] Linking static target lib/librte_security.a
00:02:09.981 [261/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:09.981 [262/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.981 [263/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.981 [264/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:09.981 [265/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:09.981 [266/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:09.981 [267/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.981 [268/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:09.981 [269/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:09.981 [270/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:09.981 [271/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:09.981 [272/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:09.981 [273/705] Linking static target lib/librte_lpm.a
00:02:09.981 [274/705] Linking static target lib/librte_eal.a
00:02:09.981 [275/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:09.981 [276/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:09.981 [277/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:09.981 [278/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:09.981 [279/705] Linking static target lib/librte_reorder.a
00:02:09.981 [280/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.981 [281/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:09.981 [282/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:09.981 [283/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.981 [284/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:09.981 [285/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:10.246 [286/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:10.246 [287/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:10.246 [288/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [289/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:10.246 [290/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:10.246 [291/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [292/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:10.246 [293/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [294/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:10.246 [295/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:10.246 [296/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [297/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:10.246 [298/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:10.246 [299/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:10.246 [300/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [301/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:10.246 [302/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:10.246 [303/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:10.246 [304/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:10.246 [305/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:10.246 [306/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:10.246 [307/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:10.246 [308/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:10.246 [309/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:10.246 [310/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:10.246 [311/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:10.246 [312/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:10.246 [313/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:10.246 [314/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:10.246 [315/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:10.246 [316/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:10.246 [317/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:10.246 [318/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:10.246 [319/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:10.246 [320/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [321/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [322/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:10.246 [323/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:10.246 [324/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [325/705] Linking static target lib/librte_rib.a
00:02:10.246 [326/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:10.246 [327/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:10.246 [328/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:10.246 [329/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:10.246 [330/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [331/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:10.246 [332/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:10.246 [333/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:10.511 [334/705] Linking static target lib/librte_efd.a
00:02:10.511 [335/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:10.511 [336/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:10.511 [337/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:10.511 [338/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:10.511 [339/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:10.511 [340/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:10.511 [341/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:10.511 [342/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:10.511 [343/705] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:10.511 [344/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:10.511 [345/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.511 [346/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:10.511 [347/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:10.511 [348/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.511 [349/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:10.511 [350/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:10.511 [351/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:10.511 [352/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:10.511 [353/705] Linking static target lib/librte_fib.a
00:02:10.780 [354/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [355/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:10.780 [356/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:10.780 [357/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:10.780 [358/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:10.780 [359/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [360/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:10.780 [361/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [362/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:10.780 [363/705] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:10.780 [364/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:10.780 [365/705] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:10.780 [366/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:10.780 [367/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [368/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:10.780 [369/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:10.780 [370/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:10.780 [371/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [372/705] Linking static target lib/librte_graph.a
00:02:10.780 [373/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:10.780 [374/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:10.780 [375/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.780 [376/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:10.780 [377/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:10.780 [378/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:10.780 [379/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:10.780 [380/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:10.780 [381/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:10.780 [382/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:10.780 [383/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:10.780 [384/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:10.780 [385/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:10.780 [386/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:11.046 [387/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:11.046 [388/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:11.046 [389/705] Linking static target lib/librte_pdump.a
00:02:11.046 [390/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:11.046 [391/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:11.046 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:11.046 [393/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:11.046 [394/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:11.046 [395/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:11.046 [396/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:11.046 [397/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:02:11.046 [398/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.046 [399/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:11.046 [400/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:11.046 [401/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:11.046 [402/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:11.046 [403/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:11.046 [404/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:11.046 [405/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:02:11.046 [406/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:02:11.046 [407/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:11.046 [408/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.046 [409/705] Linking static target lib/librte_sched.a
00:02:11.046 [410/705] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:11.046 [411/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:02:11.046 [412/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:11.046 [413/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.046 [414/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:11.046 [415/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:11.046 [416/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:11.046 [417/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:11.046 [418/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:11.046 [419/705] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:11.046 [420/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.046 [421/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:11.305 [422/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:11.305 [423/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:11.305 [424/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:11.305 [425/705] Linking static target drivers/librte_bus_vdev.a
00:02:11.305 [426/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:11.305 [427/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:11.305 [428/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:11.305 [429/705] Linking static target drivers/librte_bus_pci.a
00:02:11.305 [430/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:11.305 [431/705] Linking static target lib/librte_cryptodev.a
00:02:11.305 [432/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:02:11.305 [433/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:02:11.305 [434/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:11.305 [435/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.305 [436/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:02:11.305 [437/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:11.305 [438/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:11.305 [439/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:11.305 [440/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:11.305 [441/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:11.305 [442/705] Linking static target lib/librte_table.a
00:02:11.305 [443/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:11.305 [444/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:11.305 [445/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:11.305 [446/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:02:11.305 [447/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.305 [448/705] Linking static target lib/librte_node.a
00:02:11.305 [449/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:02:11.305 [450/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:11.305 [451/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:02:11.305 [452/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:11.305 [453/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:11.305 [454/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:02:11.305 [455/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:11.305 [456/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:11.305 [457/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:11.305 [458/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:11.305 [459/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:11.305 [460/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:11.305 [461/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:11.305 [462/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:11.305 [463/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:11.305 [464/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:11.305 [465/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:11.305 [466/705] Linking static target lib/librte_ipsec.a
00:02:11.305 [467/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:11.305 [468/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:11.305 [469/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:11.305 [470/705] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:11.305 [471/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:11.565 [472/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.565 [473/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:11.565 [474/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:11.565 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:11.565 [476/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:11.565 [477/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:11.565 [478/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:11.565 [479/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:11.565 [480/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:11.565 [481/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:11.565 [482/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.565 [483/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:11.565 [484/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:11.565 [485/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:11.565 [486/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:11.565 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:11.565 [488/705] Linking static target lib/acl/libavx2_tmp.a
00:02:11.565 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.565 [490/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:11.565 [491/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:11.565 [492/705] Linking static target lib/librte_member.a
00:02:11.565 [493/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:11.565 [494/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:11.565 [495/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:11.565 [496/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:11.565 [497/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:11.565 [498/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:11.565 [499/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:11.565 [500/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:11.565 [501/705] Linking static target drivers/librte_mempool_ring.a
00:02:11.565 [502/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:02:11.565 [503/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:11.565 [504/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:11.565 [505/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:11.566 [506/705] Linking target lib/librte_telemetry.so.24.0
00:02:11.566 [507/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.566 [508/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:11.566 [509/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:11.566 [510/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:11.566 [511/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:11.566 [512/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:11.566 [513/705] Linking target lib/librte_kvargs.so.24.0
00:02:11.566 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:11.566 [515/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:11.566 [516/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:11.566 [517/705] Linking static target lib/librte_hash.a
00:02:11.566 [518/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:11.566 [519/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:11.566 [520/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:11.566 [521/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:11.566 [522/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:11.566 [523/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:11.566 [524/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:11.566 [525/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:11.566 [526/705] Linking static target lib/librte_pdcp.a
00:02:11.566 [527/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:11.566 [528/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:11.566 [529/705] Linking static target lib/librte_eventdev.a
00:02:11.566 [530/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:11.566 [531/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.566 [532/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:11.566 [533/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:11.566 [534/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.566 [535/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:11.566 [536/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:11.566 [537/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:11.566 [538/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.566 [539/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:11.566 [540/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:11.566 [541/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:11.566 [542/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:11.566 [543/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:11.566 [544/705] Linking static target lib/librte_port.a
00:02:11.566 [545/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:11.566 [546/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:12.090 [547/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:12.090 [548/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.090 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:12.090 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:12.090 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:12.090 [552/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:12.090 [553/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:12.090 [554/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:12.090 [555/705] Linking static target lib/librte_acl.a
00:02:12.090 [556/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:12.090 [557/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:12.090 [558/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:12.090 [559/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:12.352 [560/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:12.352 [561/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.352 [562/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.352 [563/705] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:12.352 [564/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:12.352 [565/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:12.613 [566/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:12.613 [567/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:02:12.613 [568/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.613 [569/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.613 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:12.613 [571/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.874 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:12.874 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:12.874 [574/705] Linking static target lib/librte_ethdev.a
00:02:13.135 [575/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:13.135 [576/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.135 [577/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:13.135 [578/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:13.708 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:13.969 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:13.969 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:14.232 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:14.232 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:14.232 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:14.232 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:14.232 [586/705] Linking static target drivers/librte_net_i40e.a
00:02:14.806 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:15.380 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.380 [589/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.380 [590/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:19.593 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:19.593 [592/705] Linking static target lib/librte_pipeline.a
00:02:20.538 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:20.538 [594/705] Linking static target lib/librte_vhost.a
00:02:20.800 [595/705] Linking target app/dpdk-pdump
00:02:20.800 [596/705] Linking target app/dpdk-test-fib
00:02:20.800 [597/705] Linking target app/dpdk-test-security-perf
00:02:20.800 [598/705] Linking target app/dpdk-test-cmdline
00:02:20.800 [599/705] Linking target app/dpdk-test-regex
00:02:20.800 [600/705] Linking target app/dpdk-test-sad
00:02:20.800 [601/705] Linking target app/dpdk-test-acl
00:02:20.800 [602/705] Linking target app/dpdk-proc-info
00:02:20.800 [603/705] Linking target app/dpdk-test-crypto-perf
00:02:20.800 [604/705] Linking target app/dpdk-test-flow-perf
00:02:21.062 [605/705] Linking target app/dpdk-test-compress-perf
00:02:21.062 [606/705] Linking target app/dpdk-dumpcap
00:02:21.062 [607/705] Linking target app/dpdk-test-bbdev
00:02:21.062 [608/705] Linking target app/dpdk-graph
00:02:21.062 [609/705] Linking target app/dpdk-test-dma-perf
00:02:21.062 [610/705] Linking target app/dpdk-test-gpudev
00:02:21.062 [611/705] Linking target app/dpdk-test-mldev
00:02:21.062 [612/705] Linking target app/dpdk-test-pipeline
00:02:21.062 [613/705] Linking target app/dpdk-test-eventdev
00:02:21.062 [614/705] Linking target app/dpdk-testpmd
00:02:21.326 [615/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.326 [616/705] Linking target lib/librte_eal.so.24.0
00:02:21.326 [617/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:21.590 [618/705] Linking target lib/librte_meter.so.24.0
00:02:21.590 [619/705] Linking target lib/librte_ring.so.24.0
00:02:21.590 [620/705] Linking target lib/librte_cfgfile.so.24.0
00:02:21.590 [621/705] Linking target lib/librte_pci.so.24.0
00:02:21.590 [622/705] Linking target lib/librte_timer.so.24.0
00:02:21.590 [623/705] Linking target lib/librte_dmadev.so.24.0
00:02:21.590 [624/705] Linking target lib/librte_stack.so.24.0
00:02:21.590 [625/705] Linking target lib/librte_rawdev.so.24.0
00:02:21.590 [626/705] Linking target lib/librte_jobstats.so.24.0
00:02:21.590 [627/705] Linking target drivers/librte_bus_vdev.so.24.0
00:02:21.590 [628/705] Linking target lib/librte_acl.so.24.0
00:02:21.590 [629/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.590 [630/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:21.590 [631/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:21.590 [632/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:21.590 [633/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:21.590 [634/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:21.590 [635/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:21.590 [636/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:21.590 [637/705] Linking target lib/librte_rcu.so.24.0
00:02:21.590 [638/705] Linking target lib/librte_mempool.so.24.0
00:02:21.590 [639/705] Linking target drivers/librte_bus_pci.so.24.0
00:02:21.854 [640/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:21.854 [641/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:21.854 [642/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:21.854 [643/705] Linking target drivers/librte_mempool_ring.so.24.0
00:02:21.854 [644/705] Linking target lib/librte_rib.so.24.0
00:02:21.854 [645/705] Linking target lib/librte_mbuf.so.24.0
00:02:22.115 [646/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:22.116 [647/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:22.116 [648/705] Linking target lib/librte_fib.so.24.0
00:02:22.116 [649/705] Linking target lib/librte_distributor.so.24.0
00:02:22.116 [650/705] Linking target lib/librte_bbdev.so.24.0
00:02:22.116 [651/705] Linking target lib/librte_net.so.24.0
00:02:22.116 [652/705] Linking target lib/librte_compressdev.so.24.0
00:02:22.116 [653/705] Linking target lib/librte_gpudev.so.24.0
00:02:22.116 [654/705] Linking target lib/librte_regexdev.so.24.0
00:02:22.116 [655/705] Linking target lib/librte_mldev.so.24.0
00:02:22.116 [656/705] Linking target lib/librte_reorder.so.24.0
00:02:22.116 [657/705] Linking target lib/librte_sched.so.24.0
00:02:22.116 [658/705] Linking target lib/librte_cryptodev.so.24.0
00:02:22.379 [659/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:22.379 [660/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:22.379 [661/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:22.379 [662/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:22.379 [663/705] Linking target lib/librte_cmdline.so.24.0
00:02:22.379 [664/705] Linking target lib/librte_hash.so.24.0
00:02:22.379 [665/705] Linking target lib/librte_security.so.24.0
00:02:22.379 [666/705] Linking target lib/librte_ethdev.so.24.0
00:02:22.379 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:22.379 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:22.379 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:22.642 [670/705] Linking target lib/librte_lpm.so.24.0
00:02:22.642 [671/705] Linking target lib/librte_efd.so.24.0
00:02:22.642 [672/705] Linking target lib/librte_member.so.24.0
00:02:22.642 [673/705] Linking target lib/librte_ipsec.so.24.0
00:02:22.642 [674/705] Linking target lib/librte_pdcp.so.24.0
00:02:22.642 [675/705] Linking target lib/librte_gro.so.24.0
00:02:22.642 [676/705] Linking target lib/librte_metrics.so.24.0
00:02:22.642 [677/705] Linking target lib/librte_pcapng.so.24.0
00:02:22.642 [678/705] Linking target lib/librte_bpf.so.24.0
00:02:22.642 [679/705] Linking target lib/librte_gso.so.24.0
00:02:22.642 [680/705] Linking target lib/librte_ip_frag.so.24.0
00:02:22.642 [681/705] Linking target lib/librte_power.so.24.0
00:02:22.642 [682/705] Linking target lib/librte_eventdev.so.24.0
00:02:22.642 [683/705] Linking target drivers/librte_net_i40e.so.24.0
00:02:22.642 [684/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.642 [685/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:22.643 [686/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:22.643 [687/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:22.643 [688/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:22.643 [689/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:22.643 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:22.643 [691/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:22.643 [692/705] Linking target lib/librte_vhost.so.24.0
00:02:22.643 [693/705] Linking target lib/librte_dispatcher.so.24.0
00:02:22.643 [694/705] Linking target lib/librte_graph.so.24.0
00:02:22.643 [695/705] Linking target lib/librte_latencystats.so.24.0
00:02:22.643 [696/705] Linking target lib/librte_bitratestats.so.24.0
00:02:22.643 [697/705] Linking target lib/librte_pdump.so.24.0
00:02:22.906 [698/705] Linking target lib/librte_port.so.24.0
00:02:22.906 [699/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:22.906 [700/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:22.906 [701/705] Linking target lib/librte_node.so.24.0
00:02:22.906 [702/705] Linking target lib/librte_table.so.24.0
00:02:23.169 [703/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:25.093 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.093 [705/705] Linking target lib/librte_pipeline.so.24.0
00:02:25.093 15:21:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:25.093 15:21:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:25.093 15:21:05 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install
00:02:25.093 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:25.093 [0/1] Installing files.
00:02:25.361 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:25.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.367 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.367 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.368 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.368 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.368 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.369 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.636 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.636 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.636 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.636 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.636 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.637 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.638 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.639 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 
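Every header listed above, no matter which lib/ subdirectory it originates from (eal, ring, mbuf, table, port, and so on), is flattened into the single staging directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include, so one -I flag reaches the whole DPDK API surface. A minimal sketch of compiling against that staged tree; check.c is a hypothetical file, and the plain cc invocation assumes the host toolchain matches the one that produced this build:

  DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  printf '#include <rte_ring.h>\nint main(void) { return 0; }\n' > check.c
  # header-only smoke test; linking a real application additionally needs -L"$DPDK/lib"
  cc -I"$DPDK/include" -march=native -c check.c -o check.o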
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:25.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:25.641 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:25.641 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:25.641 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:25.641 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:25.641 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:25.641 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:25.641 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:25.641 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:25.641 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:25.641 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:25.641 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:25.641 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:25.641 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:25.641 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:25.641 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:25.641 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:25.641 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:25.641 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:25.641 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:25.641 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:25.641 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:25.641 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:25.641 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:25.641 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:25.641 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:25.641 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:25.641 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:25.641 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:25.641 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:25.641 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:25.641 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:25.641 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:25.641 
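These symlinks implement the standard three-name shared-library convention: the real file carries the full version (librte_eal.so.24.0), the SONAME link (librte_eal.so.24) is what the dynamic loader resolves at run time, and the unversioned link (librte_eal.so) is what the link editor finds when an application is built with -lrte_eal. A sketch of the equivalent chain created by hand, assuming the versioned object is already in place:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
  ln -sf librte_eal.so.24.0 librte_eal.so.24   # runtime name, matches the SONAME
  ln -sf librte_eal.so.24   librte_eal.so      # development name used by -lrte_eal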
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:25.641 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:25.641 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:25.641 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:25.641 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:25.641 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:25.641 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:25.641 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:25.641 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:25.641 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:25.641 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:25.641 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:25.641 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:25.641 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:25.641 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:25.641 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:25.641 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:25.641 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:25.641 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:25.641 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:25.641 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:25.641 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:25.641 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:25.641 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:25.642 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:25.642 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:25.642 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:25.642 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:25.642 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:25.642 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:25.642 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:25.642 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:25.642 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:25.642 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:25.642 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:25.642 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:25.642 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:25.642 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:25.642 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:25.642 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:25.642 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:25.642 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:25.642 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:25.642 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:25.642 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:25.642 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:25.642 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:25.642 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:25.642 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:25.642 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:25.642 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:25.642 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:25.642 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:25.642 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:25.642 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:25.642 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:25.642 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:25.642 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:25.642 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:25.642 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:25.642 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:25.642 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:25.642 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:25.642 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:25.642 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:25.642 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:25.642 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:25.642 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:25.642 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:25.642 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:25.642 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:25.642 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:25.642 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:25.642 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:25.642 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:25.642 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:25.642 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:25.642 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:25.642 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:25.642 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:25.642 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:25.642 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:25.642 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:25.642 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:25.642 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:25.642 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:25.642 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:25.642 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:25.642 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:25.642 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:25.642 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:25.642 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:25.642 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:25.642 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:25.642 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:25.642 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:25.642 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:25.642 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:25.642 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:25.642 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:25.642 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:25.642 15:21:06 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:25.642 15:21:06 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.642 00:02:25.642 real 0m26.749s 00:02:25.642 user 7m13.265s 00:02:25.642 sys 3m52.737s 00:02:25.642 15:21:06 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.642 15:21:06 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:25.643 ************************************ 00:02:25.643 END TEST build_native_dpdk 00:02:25.643 ************************************ 00:02:25.906 15:21:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:25.906 15:21:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:25.906 15:21:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:25.906 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:26.169 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.169 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.169 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:26.745 Using 'verbs' RDMA provider 00:02:42.629 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:57.559 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:57.559 Creating mk/config.mk...done. 00:02:57.559 Creating mk/cc.flags.mk...done. 00:02:57.559 Type 'make' to build. 00:02:57.559 15:21:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:57.559 15:21:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:57.559 15:21:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:57.559 15:21:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.559 ************************************ 00:02:57.559 START TEST make 00:02:57.559 ************************************ 00:02:57.559 15:21:36 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:57.559 make[1]: Nothing to be done for 'all'.
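The configure step finds the freshly staged DPDK through the two pkg-config files installed above (libdpdk-libs.pc and libdpdk.pc in build/lib/pkgconfig); --with-dpdk only tells SPDK where that staging root is. A sketch of how any consumer would recover the same compile and link flags, assuming pkg-config is on PATH; the comments show expected output, not lines from this log:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # expected: -I.../dpdk/build/include plus arch defines
  pkg-config --libs libdpdk     # expected: -L.../dpdk/build/lib -lrte_ethdev -lrte_eal ...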
00:02:57.820 The Meson build system 00:02:57.820 Version: 1.5.0 00:02:57.820 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:57.820 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.820 Build type: native build 00:02:57.820 Project name: libvfio-user 00:02:57.820 Project version: 0.0.1 00:02:57.820 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:57.820 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:57.820 Host machine cpu family: x86_64 00:02:57.820 Host machine cpu: x86_64 00:02:57.820 Run-time dependency threads found: YES 00:02:57.820 Library dl found: YES 00:02:57.820 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:57.820 Run-time dependency json-c found: YES 0.17 00:02:57.820 Run-time dependency cmocka found: YES 1.1.7 00:02:57.820 Program pytest-3 found: NO 00:02:57.820 Program flake8 found: NO 00:02:57.820 Program misspell-fixer found: NO 00:02:57.820 Program restructuredtext-lint found: NO 00:02:57.820 Program valgrind found: YES (/usr/bin/valgrind) 00:02:57.820 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.820 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.820 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.820 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:57.820 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:57.820 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:57.820 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
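The "User defined options" summary implies a meson invocation roughly like the one below; the exact command line is not recorded here, so this is a reconstruction from the Source dir, Build dir, and option values shown:

  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
      --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib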
00:02:57.820 Build targets in project: 8 00:02:57.820 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:57.820 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:57.820 00:02:57.820 libvfio-user 0.0.1 00:02:57.820 00:02:57.820 User defined options 00:02:57.820 buildtype : debug 00:02:57.820 default_library: shared 00:02:57.820 libdir : /usr/local/lib 00:02:57.820 00:02:57.820 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:58.395 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:58.395 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:58.395 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:58.395 [3/37] Compiling C object samples/null.p/null.c.o 00:02:58.395 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:58.395 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:58.395 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:58.395 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:58.395 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:58.395 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:58.395 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:58.395 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:58.395 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:58.395 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:58.395 [14/37] Compiling C object samples/server.p/server.c.o 00:02:58.395 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:58.395 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:58.395 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:58.395 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:58.395 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:58.395 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:58.395 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:58.395 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:58.395 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:58.395 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:58.395 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:58.395 [26/37] Compiling C object samples/client.p/client.c.o 00:02:58.395 [27/37] Linking target samples/client 00:02:58.395 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:58.395 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:58.659 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:58.659 [31/37] Linking target test/unit_tests 00:02:58.659 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:58.659 [33/37] Linking target samples/lspci 00:02:58.659 [34/37] Linking target samples/gpio-pci-idio-16 00:02:58.659 [35/37] Linking target samples/null 00:02:58.659 [36/37] Linking target samples/server 00:02:58.659 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:58.659 INFO: autodetecting backend as ninja 00:02:58.659 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:58.923 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:59.186 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:59.186 ninja: no work to do. 00:03:21.176 CC lib/log/log.o 00:03:21.176 CC lib/log/log_flags.o 00:03:21.176 CC lib/log/log_deprecated.o 00:03:21.176 CC lib/ut_mock/mock.o 00:03:21.176 CC lib/ut/ut.o 00:03:21.176 LIB libspdk_ut.a 00:03:21.176 LIB libspdk_log.a 00:03:21.176 SO libspdk_ut.so.2.0 00:03:21.176 LIB libspdk_ut_mock.a 00:03:21.176 SO libspdk_log.so.7.0 00:03:21.176 SO libspdk_ut_mock.so.6.0 00:03:21.176 SYMLINK libspdk_ut.so 00:03:21.176 SYMLINK libspdk_log.so 00:03:21.176 SYMLINK libspdk_ut_mock.so 00:03:21.439 CC lib/util/base64.o 00:03:21.439 CC lib/util/bit_array.o 00:03:21.439 CC lib/util/cpuset.o 00:03:21.439 CC lib/util/crc16.o 00:03:21.439 CC lib/util/crc32.o 00:03:21.439 CC lib/util/crc32c.o 00:03:21.439 CC lib/util/crc32_ieee.o 00:03:21.439 CC lib/util/crc64.o 00:03:21.439 CC lib/util/dif.o 00:03:21.439 CC lib/util/fd.o 00:03:21.439 CC lib/util/fd_group.o 00:03:21.439 CC lib/dma/dma.o 00:03:21.439 CC lib/util/file.o 00:03:21.439 CC lib/util/hexlify.o 00:03:21.439 CC lib/util/iov.o 00:03:21.439 CC lib/util/math.o 00:03:21.439 CC lib/util/net.o 00:03:21.439 CC lib/util/pipe.o 00:03:21.439 CC lib/ioat/ioat.o 00:03:21.439 CC lib/util/strerror_tls.o 00:03:21.439 CXX lib/trace_parser/trace.o 00:03:21.439 CC lib/util/string.o 00:03:21.439 CC lib/util/uuid.o 00:03:21.439 CC lib/util/xor.o 00:03:21.439 CC lib/util/zipf.o 00:03:21.439 CC lib/util/md5.o 00:03:21.701 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.701 CC lib/vfio_user/host/vfio_user.o 00:03:21.701 LIB libspdk_dma.a 00:03:21.701 SO libspdk_dma.so.5.0 00:03:21.701 LIB libspdk_ioat.a 00:03:21.701 SO libspdk_ioat.so.7.0 00:03:21.701 SYMLINK libspdk_dma.so 00:03:21.963 SYMLINK libspdk_ioat.so 00:03:21.963 LIB libspdk_vfio_user.a 00:03:21.963 SO libspdk_vfio_user.so.5.0 00:03:21.963 LIB libspdk_util.a 00:03:21.963 SYMLINK libspdk_vfio_user.so 00:03:21.963 SO libspdk_util.so.10.0 00:03:22.225 SYMLINK libspdk_util.so 00:03:22.486 CC lib/rdma_provider/common.o 00:03:22.486 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.486 CC lib/idxd/idxd.o 00:03:22.486 CC lib/rdma_utils/rdma_utils.o 00:03:22.486 CC lib/idxd/idxd_user.o 00:03:22.486 CC lib/idxd/idxd_kernel.o 00:03:22.486 CC lib/vmd/vmd.o 00:03:22.486 CC lib/vmd/led.o 00:03:22.486 CC lib/conf/conf.o 00:03:22.486 CC lib/json/json_parse.o 00:03:22.486 CC lib/json/json_util.o 00:03:22.486 CC lib/json/json_write.o 00:03:22.486 CC lib/env_dpdk/env.o 00:03:22.486 CC lib/env_dpdk/memory.o 00:03:22.486 CC lib/env_dpdk/pci.o 00:03:22.486 CC lib/env_dpdk/init.o 00:03:22.486 CC lib/env_dpdk/threads.o 00:03:22.486 CC lib/env_dpdk/pci_ioat.o 00:03:22.486 CC lib/env_dpdk/pci_virtio.o 00:03:22.486 CC lib/env_dpdk/pci_vmd.o 00:03:22.486 CC lib/env_dpdk/pci_idxd.o 00:03:22.486 CC lib/env_dpdk/pci_event.o 00:03:22.486 CC lib/env_dpdk/sigbus_handler.o 00:03:22.486 CC lib/env_dpdk/pci_dpdk.o 00:03:22.486 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.486 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.748 LIB libspdk_rdma_provider.a 00:03:22.748 SO libspdk_rdma_provider.so.6.0 00:03:22.748 LIB libspdk_conf.a 00:03:22.748 SO libspdk_conf.so.6.0 00:03:22.748 SYMLINK libspdk_rdma_provider.so 00:03:22.748 LIB libspdk_rdma_utils.a 00:03:22.748 LIB libspdk_json.a 00:03:22.748 SO 
libspdk_rdma_utils.so.1.0 00:03:23.011 SO libspdk_json.so.6.0 00:03:23.011 SYMLINK libspdk_conf.so 00:03:23.011 SYMLINK libspdk_rdma_utils.so 00:03:23.011 SYMLINK libspdk_json.so 00:03:23.011 LIB libspdk_trace_parser.a 00:03:23.011 SO libspdk_trace_parser.so.6.0 00:03:23.011 LIB libspdk_idxd.a 00:03:23.011 LIB libspdk_vmd.a 00:03:23.275 SYMLINK libspdk_trace_parser.so 00:03:23.275 SO libspdk_idxd.so.12.1 00:03:23.275 SO libspdk_vmd.so.6.0 00:03:23.275 SYMLINK libspdk_idxd.so 00:03:23.275 SYMLINK libspdk_vmd.so 00:03:23.275 CC lib/jsonrpc/jsonrpc_server.o 00:03:23.275 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:23.275 CC lib/jsonrpc/jsonrpc_client.o 00:03:23.275 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.538 LIB libspdk_jsonrpc.a 00:03:23.538 SO libspdk_jsonrpc.so.6.0 00:03:23.801 SYMLINK libspdk_jsonrpc.so 00:03:23.801 LIB libspdk_env_dpdk.a 00:03:23.801 SO libspdk_env_dpdk.so.15.0 00:03:24.063 SYMLINK libspdk_env_dpdk.so 00:03:24.063 CC lib/rpc/rpc.o 00:03:24.326 LIB libspdk_rpc.a 00:03:24.326 SO libspdk_rpc.so.6.0 00:03:24.326 SYMLINK libspdk_rpc.so 00:03:24.902 CC lib/notify/notify.o 00:03:24.902 CC lib/trace/trace.o 00:03:24.902 CC lib/keyring/keyring.o 00:03:24.902 CC lib/notify/notify_rpc.o 00:03:24.902 CC lib/trace/trace_flags.o 00:03:24.902 CC lib/keyring/keyring_rpc.o 00:03:24.902 CC lib/trace/trace_rpc.o 00:03:24.902 LIB libspdk_notify.a 00:03:24.902 SO libspdk_notify.so.6.0 00:03:24.902 LIB libspdk_keyring.a 00:03:24.902 LIB libspdk_trace.a 00:03:25.164 SO libspdk_keyring.so.2.0 00:03:25.164 SO libspdk_trace.so.11.0 00:03:25.164 SYMLINK libspdk_notify.so 00:03:25.164 SYMLINK libspdk_keyring.so 00:03:25.164 SYMLINK libspdk_trace.so 00:03:25.427 CC lib/thread/thread.o 00:03:25.427 CC lib/thread/iobuf.o 00:03:25.427 CC lib/sock/sock.o 00:03:25.427 CC lib/sock/sock_rpc.o 00:03:26.003 LIB libspdk_sock.a 00:03:26.003 SO libspdk_sock.so.10.0 00:03:26.003 SYMLINK libspdk_sock.so 00:03:26.265 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:26.265 CC lib/nvme/nvme_ctrlr.o 00:03:26.265 CC lib/nvme/nvme_fabric.o 00:03:26.265 CC lib/nvme/nvme_ns_cmd.o 00:03:26.265 CC lib/nvme/nvme_ns.o 00:03:26.265 CC lib/nvme/nvme_pcie_common.o 00:03:26.265 CC lib/nvme/nvme_pcie.o 00:03:26.265 CC lib/nvme/nvme_qpair.o 00:03:26.265 CC lib/nvme/nvme.o 00:03:26.265 CC lib/nvme/nvme_quirks.o 00:03:26.265 CC lib/nvme/nvme_transport.o 00:03:26.265 CC lib/nvme/nvme_discovery.o 00:03:26.265 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.265 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.265 CC lib/nvme/nvme_tcp.o 00:03:26.265 CC lib/nvme/nvme_opal.o 00:03:26.265 CC lib/nvme/nvme_poll_group.o 00:03:26.265 CC lib/nvme/nvme_io_msg.o 00:03:26.265 CC lib/nvme/nvme_zns.o 00:03:26.265 CC lib/nvme/nvme_stubs.o 00:03:26.265 CC lib/nvme/nvme_auth.o 00:03:26.265 CC lib/nvme/nvme_cuse.o 00:03:26.265 CC lib/nvme/nvme_vfio_user.o 00:03:26.265 CC lib/nvme/nvme_rdma.o 00:03:26.838 LIB libspdk_thread.a 00:03:26.838 SO libspdk_thread.so.10.1 00:03:26.838 SYMLINK libspdk_thread.so 00:03:27.414 CC lib/vfu_tgt/tgt_rpc.o 00:03:27.414 CC lib/vfu_tgt/tgt_endpoint.o 00:03:27.414 CC lib/accel/accel.o 00:03:27.414 CC lib/init/json_config.o 00:03:27.414 CC lib/accel/accel_rpc.o 00:03:27.414 CC lib/virtio/virtio_vhost_user.o 00:03:27.414 CC lib/accel/accel_sw.o 00:03:27.414 CC lib/init/subsystem.o 00:03:27.414 CC lib/virtio/virtio.o 00:03:27.414 CC lib/init/subsystem_rpc.o 00:03:27.414 CC lib/fsdev/fsdev.o 00:03:27.414 CC lib/virtio/virtio_vfio_user.o 00:03:27.414 CC lib/init/rpc.o 00:03:27.414 CC lib/fsdev/fsdev_rpc.o 00:03:27.414 CC lib/virtio/virtio_pci.o 
00:03:27.414 CC lib/fsdev/fsdev_io.o 00:03:27.414 CC lib/blob/blobstore.o 00:03:27.414 CC lib/blob/request.o 00:03:27.414 CC lib/blob/zeroes.o 00:03:27.414 CC lib/blob/blob_bs_dev.o 00:03:27.676 LIB libspdk_init.a 00:03:27.676 SO libspdk_init.so.6.0 00:03:27.676 LIB libspdk_vfu_tgt.a 00:03:27.676 LIB libspdk_virtio.a 00:03:27.676 SO libspdk_vfu_tgt.so.3.0 00:03:27.676 SYMLINK libspdk_init.so 00:03:27.676 SO libspdk_virtio.so.7.0 00:03:27.676 SYMLINK libspdk_vfu_tgt.so 00:03:27.676 SYMLINK libspdk_virtio.so 00:03:27.939 LIB libspdk_fsdev.a 00:03:27.939 SO libspdk_fsdev.so.1.0 00:03:27.939 CC lib/event/app.o 00:03:27.939 CC lib/event/reactor.o 00:03:27.939 CC lib/event/log_rpc.o 00:03:27.939 CC lib/event/app_rpc.o 00:03:27.939 CC lib/event/scheduler_static.o 00:03:27.939 SYMLINK libspdk_fsdev.so 00:03:28.203 LIB libspdk_accel.a 00:03:28.203 LIB libspdk_nvme.a 00:03:28.203 SO libspdk_accel.so.16.0 00:03:28.466 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.466 LIB libspdk_event.a 00:03:28.466 SYMLINK libspdk_accel.so 00:03:28.466 SO libspdk_nvme.so.14.0 00:03:28.466 SO libspdk_event.so.14.0 00:03:28.466 SYMLINK libspdk_event.so 00:03:28.730 SYMLINK libspdk_nvme.so 00:03:28.730 CC lib/bdev/bdev.o 00:03:28.730 CC lib/bdev/bdev_rpc.o 00:03:28.730 CC lib/bdev/bdev_zone.o 00:03:28.730 CC lib/bdev/part.o 00:03:28.730 CC lib/bdev/scsi_nvme.o 00:03:28.993 LIB libspdk_fuse_dispatcher.a 00:03:28.993 SO libspdk_fuse_dispatcher.so.1.0 00:03:28.993 SYMLINK libspdk_fuse_dispatcher.so 00:03:29.941 LIB libspdk_blob.a 00:03:29.941 SO libspdk_blob.so.11.0 00:03:30.203 SYMLINK libspdk_blob.so 00:03:30.465 CC lib/lvol/lvol.o 00:03:30.465 CC lib/blobfs/blobfs.o 00:03:30.465 CC lib/blobfs/tree.o 00:03:31.040 LIB libspdk_bdev.a 00:03:31.040 SO libspdk_bdev.so.16.0 00:03:31.040 LIB libspdk_blobfs.a 00:03:31.302 SO libspdk_blobfs.so.10.0 00:03:31.302 SYMLINK libspdk_bdev.so 00:03:31.302 LIB libspdk_lvol.a 00:03:31.302 SYMLINK libspdk_blobfs.so 00:03:31.302 SO libspdk_lvol.so.10.0 00:03:31.302 SYMLINK libspdk_lvol.so 00:03:31.573 CC lib/ftl/ftl_core.o 00:03:31.573 CC lib/ftl/ftl_init.o 00:03:31.573 CC lib/ftl/ftl_layout.o 00:03:31.573 CC lib/ftl/ftl_debug.o 00:03:31.573 CC lib/ftl/ftl_io.o 00:03:31.573 CC lib/ftl/ftl_sb.o 00:03:31.573 CC lib/ftl/ftl_l2p.o 00:03:31.573 CC lib/ftl/ftl_l2p_flat.o 00:03:31.573 CC lib/ftl/ftl_band.o 00:03:31.573 CC lib/ftl/ftl_nv_cache.o 00:03:31.573 CC lib/ftl/ftl_band_ops.o 00:03:31.573 CC lib/ftl/ftl_writer.o 00:03:31.573 CC lib/ftl/ftl_rq.o 00:03:31.573 CC lib/ftl/ftl_p2l.o 00:03:31.573 CC lib/ftl/ftl_reloc.o 00:03:31.573 CC lib/ftl/ftl_l2p_cache.o 00:03:31.573 CC lib/ftl/ftl_p2l_log.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.573 CC lib/scsi/dev.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.573 CC lib/ublk/ublk.o 00:03:31.573 CC lib/scsi/lun.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.573 CC lib/ublk/ublk_rpc.o 00:03:31.573 CC lib/scsi/port.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.573 CC lib/scsi/scsi.o 00:03:31.573 CC lib/nvmf/ctrlr.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.573 CC lib/nbd/nbd.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.573 CC lib/nbd/nbd_rpc.o 00:03:31.573 CC lib/scsi/scsi_bdev.o 00:03:31.573 CC lib/nvmf/ctrlr_discovery.o 00:03:31.573 CC lib/nvmf/ctrlr_bdev.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.573 CC lib/scsi/scsi_pr.o 00:03:31.573 CC lib/nvmf/subsystem.o 00:03:31.573 CC lib/scsi/scsi_rpc.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.573 CC lib/nvmf/nvmf_rpc.o 
00:03:31.573 CC lib/nvmf/transport.o 00:03:31.573 CC lib/scsi/task.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.573 CC lib/nvmf/nvmf.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.573 CC lib/nvmf/tcp.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.573 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.573 CC lib/nvmf/stubs.o 00:03:31.573 CC lib/nvmf/rdma.o 00:03:31.573 CC lib/nvmf/mdns_server.o 00:03:31.573 CC lib/nvmf/auth.o 00:03:31.573 CC lib/ftl/utils/ftl_conf.o 00:03:31.573 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.573 CC lib/nvmf/vfio_user.o 00:03:31.573 CC lib/ftl/utils/ftl_mempool.o 00:03:31.573 CC lib/ftl/utils/ftl_md.o 00:03:31.573 CC lib/ftl/utils/ftl_property.o 00:03:31.573 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.573 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.573 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.573 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.573 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.573 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:31.573 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.573 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.573 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.573 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.573 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.573 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:31.573 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:31.573 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.573 CC lib/ftl/base/ftl_base_dev.o 00:03:31.573 CC lib/ftl/ftl_trace.o 00:03:32.149 LIB libspdk_nbd.a 00:03:32.149 SO libspdk_nbd.so.7.0 00:03:32.149 SYMLINK libspdk_nbd.so 00:03:32.149 LIB libspdk_scsi.a 00:03:32.149 SO libspdk_scsi.so.9.0 00:03:32.412 LIB libspdk_ublk.a 00:03:32.412 SYMLINK libspdk_scsi.so 00:03:32.412 SO libspdk_ublk.so.3.0 00:03:32.412 SYMLINK libspdk_ublk.so 00:03:32.674 LIB libspdk_ftl.a 00:03:32.674 CC lib/iscsi/conn.o 00:03:32.674 CC lib/iscsi/init_grp.o 00:03:32.674 CC lib/iscsi/iscsi.o 00:03:32.674 CC lib/iscsi/param.o 00:03:32.674 CC lib/iscsi/portal_grp.o 00:03:32.674 CC lib/iscsi/tgt_node.o 00:03:32.674 CC lib/iscsi/iscsi_subsystem.o 00:03:32.674 CC lib/iscsi/iscsi_rpc.o 00:03:32.674 CC lib/iscsi/task.o 00:03:32.674 CC lib/vhost/vhost.o 00:03:32.674 CC lib/vhost/vhost_rpc.o 00:03:32.674 CC lib/vhost/vhost_scsi.o 00:03:32.674 CC lib/vhost/vhost_blk.o 00:03:32.674 CC lib/vhost/rte_vhost_user.o 00:03:32.674 SO libspdk_ftl.so.9.0 00:03:32.937 SYMLINK libspdk_ftl.so 00:03:33.884 LIB libspdk_nvmf.a 00:03:33.884 LIB libspdk_vhost.a 00:03:33.884 SO libspdk_vhost.so.8.0 00:03:33.884 SO libspdk_nvmf.so.19.0 00:03:33.884 SYMLINK libspdk_vhost.so 00:03:33.884 LIB libspdk_iscsi.a 00:03:33.884 SYMLINK libspdk_nvmf.so 00:03:33.884 SO libspdk_iscsi.so.8.0 00:03:34.147 SYMLINK libspdk_iscsi.so 00:03:34.722 CC module/vfu_device/vfu_virtio.o 00:03:34.722 CC module/vfu_device/vfu_virtio_blk.o 00:03:34.723 CC module/vfu_device/vfu_virtio_scsi.o 00:03:34.723 CC module/vfu_device/vfu_virtio_rpc.o 00:03:34.723 CC module/vfu_device/vfu_virtio_fs.o 00:03:34.723 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.985 CC module/scheduler/gscheduler/gscheduler.o 00:03:34.985 CC module/accel/dsa/accel_dsa.o 00:03:34.985 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.985 LIB libspdk_env_dpdk_rpc.a 00:03:34.985 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:34.985 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:34.985 CC module/accel/error/accel_error.o 00:03:34.985 CC module/accel/error/accel_error_rpc.o 00:03:34.985 CC module/keyring/linux/keyring.o 00:03:34.985 CC 
module/keyring/linux/keyring_rpc.o 00:03:34.985 CC module/blob/bdev/blob_bdev.o 00:03:34.985 CC module/sock/posix/posix.o 00:03:34.985 CC module/accel/iaa/accel_iaa.o 00:03:34.985 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.985 CC module/fsdev/aio/fsdev_aio.o 00:03:34.985 CC module/accel/ioat/accel_ioat.o 00:03:34.985 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:34.985 CC module/keyring/file/keyring.o 00:03:34.985 CC module/keyring/file/keyring_rpc.o 00:03:34.985 CC module/fsdev/aio/linux_aio_mgr.o 00:03:34.985 CC module/accel/ioat/accel_ioat_rpc.o 00:03:34.985 SO libspdk_env_dpdk_rpc.so.6.0 00:03:34.985 SYMLINK libspdk_env_dpdk_rpc.so 00:03:34.985 LIB libspdk_scheduler_gscheduler.a 00:03:34.985 LIB libspdk_keyring_linux.a 00:03:34.985 LIB libspdk_keyring_file.a 00:03:34.985 SO libspdk_scheduler_gscheduler.so.4.0 00:03:34.985 LIB libspdk_scheduler_dpdk_governor.a 00:03:35.247 SO libspdk_keyring_linux.so.1.0 00:03:35.247 LIB libspdk_accel_error.a 00:03:35.247 LIB libspdk_accel_ioat.a 00:03:35.247 SO libspdk_keyring_file.so.2.0 00:03:35.247 LIB libspdk_scheduler_dynamic.a 00:03:35.247 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:35.247 LIB libspdk_accel_iaa.a 00:03:35.247 SO libspdk_accel_ioat.so.6.0 00:03:35.247 SO libspdk_accel_error.so.2.0 00:03:35.247 SYMLINK libspdk_scheduler_gscheduler.so 00:03:35.247 SO libspdk_scheduler_dynamic.so.4.0 00:03:35.247 SYMLINK libspdk_keyring_linux.so 00:03:35.247 LIB libspdk_blob_bdev.a 00:03:35.247 SO libspdk_accel_iaa.so.3.0 00:03:35.247 SYMLINK libspdk_keyring_file.so 00:03:35.247 LIB libspdk_accel_dsa.a 00:03:35.247 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:35.247 SYMLINK libspdk_accel_ioat.so 00:03:35.247 SYMLINK libspdk_scheduler_dynamic.so 00:03:35.247 SO libspdk_blob_bdev.so.11.0 00:03:35.247 SYMLINK libspdk_accel_error.so 00:03:35.247 SO libspdk_accel_dsa.so.5.0 00:03:35.247 SYMLINK libspdk_accel_iaa.so 00:03:35.247 LIB libspdk_vfu_device.a 00:03:35.247 SYMLINK libspdk_blob_bdev.so 00:03:35.247 SYMLINK libspdk_accel_dsa.so 00:03:35.247 SO libspdk_vfu_device.so.3.0 00:03:35.510 SYMLINK libspdk_vfu_device.so 00:03:35.510 LIB libspdk_sock_posix.a 00:03:35.510 SO libspdk_sock_posix.so.6.0 00:03:35.510 LIB libspdk_fsdev_aio.a 00:03:35.510 SO libspdk_fsdev_aio.so.1.0 00:03:35.510 SYMLINK libspdk_sock_posix.so 00:03:35.774 SYMLINK libspdk_fsdev_aio.so 00:03:35.774 CC module/bdev/error/vbdev_error.o 00:03:35.774 CC module/bdev/error/vbdev_error_rpc.o 00:03:35.774 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.774 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:35.774 CC module/bdev/delay/vbdev_delay.o 00:03:35.774 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:35.774 CC module/bdev/null/bdev_null.o 00:03:35.774 CC module/bdev/null/bdev_null_rpc.o 00:03:35.774 CC module/bdev/gpt/gpt.o 00:03:35.774 CC module/bdev/gpt/vbdev_gpt.o 00:03:35.774 CC module/bdev/split/vbdev_split_rpc.o 00:03:35.774 CC module/bdev/split/vbdev_split.o 00:03:35.774 CC module/bdev/passthru/vbdev_passthru.o 00:03:35.774 CC module/bdev/ftl/bdev_ftl.o 00:03:35.774 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.774 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.775 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:35.775 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.775 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.775 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.775 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.775 CC module/blobfs/bdev/blobfs_bdev.o 00:03:35.775 CC module/bdev/raid/bdev_raid.o 00:03:35.775 CC module/bdev/aio/bdev_aio.o 00:03:35.775 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:35.775 CC module/bdev/malloc/bdev_malloc.o 00:03:35.775 CC module/bdev/nvme/bdev_nvme.o 00:03:35.775 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.775 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.775 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:35.775 CC module/bdev/raid/raid0.o 00:03:35.775 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.775 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:35.775 CC module/bdev/nvme/nvme_rpc.o 00:03:35.775 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.775 CC module/bdev/raid/raid1.o 00:03:35.775 CC module/bdev/raid/concat.o 00:03:35.775 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:35.775 CC module/bdev/nvme/vbdev_opal.o 00:03:36.036 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:36.036 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:36.036 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:36.297 LIB libspdk_blobfs_bdev.a 00:03:36.297 LIB libspdk_bdev_split.a 00:03:36.297 SO libspdk_blobfs_bdev.so.6.0 00:03:36.297 SO libspdk_bdev_split.so.6.0 00:03:36.297 SYMLINK libspdk_blobfs_bdev.so 00:03:36.297 LIB libspdk_bdev_ftl.a 00:03:36.297 LIB libspdk_bdev_gpt.a 00:03:36.297 LIB libspdk_bdev_null.a 00:03:36.297 LIB libspdk_bdev_error.a 00:03:36.297 LIB libspdk_bdev_zone_block.a 00:03:36.297 SYMLINK libspdk_bdev_split.so 00:03:36.297 LIB libspdk_bdev_aio.a 00:03:36.297 LIB libspdk_bdev_malloc.a 00:03:36.297 SO libspdk_bdev_null.so.6.0 00:03:36.297 SO libspdk_bdev_ftl.so.6.0 00:03:36.297 SO libspdk_bdev_gpt.so.6.0 00:03:36.297 SO libspdk_bdev_error.so.6.0 00:03:36.297 SO libspdk_bdev_zone_block.so.6.0 00:03:36.297 SO libspdk_bdev_aio.so.6.0 00:03:36.297 LIB libspdk_bdev_passthru.a 00:03:36.297 SO libspdk_bdev_malloc.so.6.0 00:03:36.297 LIB libspdk_bdev_delay.a 00:03:36.558 SYMLINK libspdk_bdev_null.so 00:03:36.558 SO libspdk_bdev_passthru.so.6.0 00:03:36.558 SYMLINK libspdk_bdev_gpt.so 00:03:36.558 SYMLINK libspdk_bdev_ftl.so 00:03:36.558 SYMLINK libspdk_bdev_error.so 00:03:36.558 SYMLINK libspdk_bdev_zone_block.so 00:03:36.558 LIB libspdk_bdev_iscsi.a 00:03:36.558 LIB libspdk_bdev_lvol.a 00:03:36.558 SYMLINK libspdk_bdev_aio.so 00:03:36.558 SYMLINK libspdk_bdev_malloc.so 00:03:36.558 SO libspdk_bdev_delay.so.6.0 00:03:36.558 SO libspdk_bdev_iscsi.so.6.0 00:03:36.558 SO libspdk_bdev_lvol.so.6.0 00:03:36.558 SYMLINK libspdk_bdev_passthru.so 00:03:36.558 SYMLINK libspdk_bdev_delay.so 00:03:36.558 SYMLINK libspdk_bdev_iscsi.so 00:03:36.558 LIB libspdk_bdev_virtio.a 00:03:36.558 SYMLINK libspdk_bdev_lvol.so 00:03:36.558 SO libspdk_bdev_virtio.so.6.0 00:03:36.819 SYMLINK libspdk_bdev_virtio.so 00:03:36.819 LIB libspdk_bdev_raid.a 00:03:37.082 SO libspdk_bdev_raid.so.6.0 00:03:37.082 SYMLINK libspdk_bdev_raid.so 00:03:38.029 LIB libspdk_bdev_nvme.a 00:03:38.029 SO libspdk_bdev_nvme.so.7.0 00:03:38.291 SYMLINK libspdk_bdev_nvme.so 00:03:38.866 CC module/event/subsystems/vmd/vmd.o 00:03:38.866 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:38.866 CC module/event/subsystems/sock/sock.o 00:03:38.866 CC module/event/subsystems/iobuf/iobuf.o 00:03:38.866 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:38.866 CC module/event/subsystems/keyring/keyring.o 00:03:38.866 CC module/event/subsystems/fsdev/fsdev.o 00:03:38.866 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:38.866 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:38.866 CC module/event/subsystems/scheduler/scheduler.o 00:03:39.129 LIB libspdk_event_vhost_blk.a 00:03:39.129 LIB libspdk_event_vmd.a 00:03:39.129 LIB libspdk_event_scheduler.a 00:03:39.129 LIB 
libspdk_event_sock.a 00:03:39.129 LIB libspdk_event_keyring.a 00:03:39.129 LIB libspdk_event_fsdev.a 00:03:39.129 LIB libspdk_event_vfu_tgt.a 00:03:39.129 LIB libspdk_event_iobuf.a 00:03:39.129 SO libspdk_event_sock.so.5.0 00:03:39.129 SO libspdk_event_vhost_blk.so.3.0 00:03:39.129 SO libspdk_event_keyring.so.1.0 00:03:39.129 SO libspdk_event_vmd.so.6.0 00:03:39.129 SO libspdk_event_scheduler.so.4.0 00:03:39.129 SO libspdk_event_fsdev.so.1.0 00:03:39.129 SO libspdk_event_vfu_tgt.so.3.0 00:03:39.129 SO libspdk_event_iobuf.so.3.0 00:03:39.129 SYMLINK libspdk_event_vhost_blk.so 00:03:39.129 SYMLINK libspdk_event_sock.so 00:03:39.129 SYMLINK libspdk_event_keyring.so 00:03:39.129 SYMLINK libspdk_event_vmd.so 00:03:39.129 SYMLINK libspdk_event_scheduler.so 00:03:39.129 SYMLINK libspdk_event_fsdev.so 00:03:39.129 SYMLINK libspdk_event_vfu_tgt.so 00:03:39.129 SYMLINK libspdk_event_iobuf.so 00:03:39.705 CC module/event/subsystems/accel/accel.o 00:03:39.705 LIB libspdk_event_accel.a 00:03:39.705 SO libspdk_event_accel.so.6.0 00:03:39.968 SYMLINK libspdk_event_accel.so 00:03:40.231 CC module/event/subsystems/bdev/bdev.o 00:03:40.493 LIB libspdk_event_bdev.a 00:03:40.493 SO libspdk_event_bdev.so.6.0 00:03:40.493 SYMLINK libspdk_event_bdev.so 00:03:40.756 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:40.756 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:40.756 CC module/event/subsystems/nbd/nbd.o 00:03:40.756 CC module/event/subsystems/ublk/ublk.o 00:03:40.756 CC module/event/subsystems/scsi/scsi.o 00:03:41.019 LIB libspdk_event_ublk.a 00:03:41.019 LIB libspdk_event_nbd.a 00:03:41.019 LIB libspdk_event_scsi.a 00:03:41.019 SO libspdk_event_ublk.so.3.0 00:03:41.019 SO libspdk_event_nbd.so.6.0 00:03:41.019 LIB libspdk_event_nvmf.a 00:03:41.019 SO libspdk_event_scsi.so.6.0 00:03:41.019 SYMLINK libspdk_event_nbd.so 00:03:41.019 SYMLINK libspdk_event_ublk.so 00:03:41.019 SO libspdk_event_nvmf.so.6.0 00:03:41.283 SYMLINK libspdk_event_scsi.so 00:03:41.283 SYMLINK libspdk_event_nvmf.so 00:03:41.546 CC module/event/subsystems/iscsi/iscsi.o 00:03:41.546 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:41.809 LIB libspdk_event_vhost_scsi.a 00:03:41.809 LIB libspdk_event_iscsi.a 00:03:41.809 SO libspdk_event_vhost_scsi.so.3.0 00:03:41.809 SO libspdk_event_iscsi.so.6.0 00:03:41.809 SYMLINK libspdk_event_vhost_scsi.so 00:03:41.809 SYMLINK libspdk_event_iscsi.so 00:03:42.073 SO libspdk.so.6.0 00:03:42.073 SYMLINK libspdk.so 00:03:42.336 CC app/trace_record/trace_record.o 00:03:42.336 CXX app/trace/trace.o 00:03:42.336 CC test/rpc_client/rpc_client_test.o 00:03:42.336 CC app/spdk_lspci/spdk_lspci.o 00:03:42.336 TEST_HEADER include/spdk/accel.h 00:03:42.336 CC app/spdk_nvme_identify/identify.o 00:03:42.336 TEST_HEADER include/spdk/assert.h 00:03:42.336 CC app/spdk_nvme_perf/perf.o 00:03:42.336 TEST_HEADER include/spdk/accel_module.h 00:03:42.336 TEST_HEADER include/spdk/barrier.h 00:03:42.336 CC app/spdk_top/spdk_top.o 00:03:42.336 CC app/spdk_nvme_discover/discovery_aer.o 00:03:42.336 TEST_HEADER include/spdk/base64.h 00:03:42.336 TEST_HEADER include/spdk/bdev.h 00:03:42.336 TEST_HEADER include/spdk/bdev_module.h 00:03:42.336 TEST_HEADER include/spdk/bdev_zone.h 00:03:42.336 TEST_HEADER include/spdk/bit_array.h 00:03:42.336 TEST_HEADER include/spdk/bit_pool.h 00:03:42.336 TEST_HEADER include/spdk/blob_bdev.h 00:03:42.336 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:42.336 TEST_HEADER include/spdk/blobfs.h 00:03:42.336 TEST_HEADER include/spdk/blob.h 00:03:42.336 TEST_HEADER include/spdk/conf.h 
00:03:42.336 TEST_HEADER include/spdk/config.h 00:03:42.336 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:42.336 TEST_HEADER include/spdk/cpuset.h 00:03:42.336 TEST_HEADER include/spdk/crc16.h 00:03:42.336 TEST_HEADER include/spdk/crc32.h 00:03:42.609 TEST_HEADER include/spdk/crc64.h 00:03:42.609 TEST_HEADER include/spdk/dma.h 00:03:42.609 TEST_HEADER include/spdk/dif.h 00:03:42.609 TEST_HEADER include/spdk/endian.h 00:03:42.609 TEST_HEADER include/spdk/env.h 00:03:42.609 TEST_HEADER include/spdk/env_dpdk.h 00:03:42.609 TEST_HEADER include/spdk/event.h 00:03:42.609 TEST_HEADER include/spdk/fd_group.h 00:03:42.609 TEST_HEADER include/spdk/fd.h 00:03:42.609 TEST_HEADER include/spdk/file.h 00:03:42.609 CC app/spdk_dd/spdk_dd.o 00:03:42.609 TEST_HEADER include/spdk/fsdev.h 00:03:42.609 TEST_HEADER include/spdk/fsdev_module.h 00:03:42.609 TEST_HEADER include/spdk/ftl.h 00:03:42.609 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:42.609 TEST_HEADER include/spdk/gpt_spec.h 00:03:42.609 TEST_HEADER include/spdk/histogram_data.h 00:03:42.609 TEST_HEADER include/spdk/hexlify.h 00:03:42.609 TEST_HEADER include/spdk/idxd_spec.h 00:03:42.609 TEST_HEADER include/spdk/idxd.h 00:03:42.609 TEST_HEADER include/spdk/ioat.h 00:03:42.609 TEST_HEADER include/spdk/init.h 00:03:42.609 TEST_HEADER include/spdk/ioat_spec.h 00:03:42.609 TEST_HEADER include/spdk/iscsi_spec.h 00:03:42.609 TEST_HEADER include/spdk/json.h 00:03:42.609 TEST_HEADER include/spdk/jsonrpc.h 00:03:42.609 TEST_HEADER include/spdk/keyring.h 00:03:42.609 TEST_HEADER include/spdk/keyring_module.h 00:03:42.609 TEST_HEADER include/spdk/likely.h 00:03:42.609 TEST_HEADER include/spdk/log.h 00:03:42.609 TEST_HEADER include/spdk/lvol.h 00:03:42.609 TEST_HEADER include/spdk/md5.h 00:03:42.609 TEST_HEADER include/spdk/memory.h 00:03:42.609 TEST_HEADER include/spdk/mmio.h 00:03:42.609 TEST_HEADER include/spdk/nbd.h 00:03:42.609 TEST_HEADER include/spdk/net.h 00:03:42.609 TEST_HEADER include/spdk/notify.h 00:03:42.609 CC app/spdk_tgt/spdk_tgt.o 00:03:42.609 TEST_HEADER include/spdk/nvme_intel.h 00:03:42.609 TEST_HEADER include/spdk/nvme.h 00:03:42.609 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:42.609 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:42.609 TEST_HEADER include/spdk/nvme_zns.h 00:03:42.609 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:42.609 TEST_HEADER include/spdk/nvme_spec.h 00:03:42.609 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:42.609 CC app/nvmf_tgt/nvmf_main.o 00:03:42.609 TEST_HEADER include/spdk/nvmf.h 00:03:42.609 CC app/iscsi_tgt/iscsi_tgt.o 00:03:42.609 TEST_HEADER include/spdk/nvmf_transport.h 00:03:42.609 TEST_HEADER include/spdk/opal_spec.h 00:03:42.609 TEST_HEADER include/spdk/opal.h 00:03:42.609 TEST_HEADER include/spdk/nvmf_spec.h 00:03:42.609 TEST_HEADER include/spdk/pci_ids.h 00:03:42.609 TEST_HEADER include/spdk/pipe.h 00:03:42.609 TEST_HEADER include/spdk/rpc.h 00:03:42.609 TEST_HEADER include/spdk/queue.h 00:03:42.609 TEST_HEADER include/spdk/reduce.h 00:03:42.609 TEST_HEADER include/spdk/scheduler.h 00:03:42.609 TEST_HEADER include/spdk/scsi.h 00:03:42.609 TEST_HEADER include/spdk/scsi_spec.h 00:03:42.609 TEST_HEADER include/spdk/sock.h 00:03:42.609 TEST_HEADER include/spdk/stdinc.h 00:03:42.609 TEST_HEADER include/spdk/string.h 00:03:42.609 TEST_HEADER include/spdk/thread.h 00:03:42.609 TEST_HEADER include/spdk/trace.h 00:03:42.609 TEST_HEADER include/spdk/trace_parser.h 00:03:42.609 TEST_HEADER include/spdk/tree.h 00:03:42.609 TEST_HEADER include/spdk/ublk.h 00:03:42.609 TEST_HEADER include/spdk/util.h 
00:03:42.609 TEST_HEADER include/spdk/uuid.h 00:03:42.609 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:42.609 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:42.609 TEST_HEADER include/spdk/version.h 00:03:42.609 TEST_HEADER include/spdk/vhost.h 00:03:42.609 TEST_HEADER include/spdk/xor.h 00:03:42.609 TEST_HEADER include/spdk/vmd.h 00:03:42.609 CXX test/cpp_headers/accel.o 00:03:42.609 TEST_HEADER include/spdk/zipf.h 00:03:42.609 CXX test/cpp_headers/accel_module.o 00:03:42.609 CXX test/cpp_headers/assert.o 00:03:42.609 CXX test/cpp_headers/base64.o 00:03:42.609 CXX test/cpp_headers/barrier.o 00:03:42.609 CXX test/cpp_headers/bdev_module.o 00:03:42.609 CXX test/cpp_headers/bdev.o 00:03:42.609 CXX test/cpp_headers/bit_array.o 00:03:42.609 CXX test/cpp_headers/bit_pool.o 00:03:42.609 CXX test/cpp_headers/blob_bdev.o 00:03:42.609 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.609 CXX test/cpp_headers/bdev_zone.o 00:03:42.609 CXX test/cpp_headers/conf.o 00:03:42.609 CXX test/cpp_headers/blobfs.o 00:03:42.609 CXX test/cpp_headers/blob.o 00:03:42.609 CXX test/cpp_headers/config.o 00:03:42.609 CXX test/cpp_headers/cpuset.o 00:03:42.609 CXX test/cpp_headers/crc16.o 00:03:42.609 CXX test/cpp_headers/crc32.o 00:03:42.609 CXX test/cpp_headers/crc64.o 00:03:42.609 CXX test/cpp_headers/endian.o 00:03:42.609 CXX test/cpp_headers/dif.o 00:03:42.610 CXX test/cpp_headers/dma.o 00:03:42.610 CXX test/cpp_headers/env_dpdk.o 00:03:42.610 CXX test/cpp_headers/env.o 00:03:42.610 CXX test/cpp_headers/event.o 00:03:42.610 CXX test/cpp_headers/fd_group.o 00:03:42.610 CXX test/cpp_headers/fsdev.o 00:03:42.610 CXX test/cpp_headers/fd.o 00:03:42.610 CXX test/cpp_headers/fsdev_module.o 00:03:42.610 CXX test/cpp_headers/ftl.o 00:03:42.610 CXX test/cpp_headers/gpt_spec.o 00:03:42.610 CXX test/cpp_headers/file.o 00:03:42.610 CXX test/cpp_headers/fuse_dispatcher.o 00:03:42.610 CXX test/cpp_headers/histogram_data.o 00:03:42.610 CXX test/cpp_headers/hexlify.o 00:03:42.610 CXX test/cpp_headers/init.o 00:03:42.610 CC examples/ioat/perf/perf.o 00:03:42.610 CXX test/cpp_headers/idxd.o 00:03:42.610 CXX test/cpp_headers/idxd_spec.o 00:03:42.610 CXX test/cpp_headers/ioat.o 00:03:42.610 CXX test/cpp_headers/ioat_spec.o 00:03:42.610 CXX test/cpp_headers/iscsi_spec.o 00:03:42.610 CXX test/cpp_headers/json.o 00:03:42.610 CXX test/cpp_headers/keyring.o 00:03:42.610 CXX test/cpp_headers/jsonrpc.o 00:03:42.610 CXX test/cpp_headers/keyring_module.o 00:03:42.610 CXX test/cpp_headers/log.o 00:03:42.610 CXX test/cpp_headers/lvol.o 00:03:42.610 CXX test/cpp_headers/likely.o 00:03:42.610 CXX test/cpp_headers/md5.o 00:03:42.610 CXX test/cpp_headers/memory.o 00:03:42.610 CXX test/cpp_headers/nbd.o 00:03:42.610 LINK spdk_lspci 00:03:42.610 CC test/env/pci/pci_ut.o 00:03:42.610 CXX test/cpp_headers/mmio.o 00:03:42.610 CXX test/cpp_headers/net.o 00:03:42.610 CXX test/cpp_headers/notify.o 00:03:42.610 CC test/env/memory/memory_ut.o 00:03:42.610 CXX test/cpp_headers/nvme.o 00:03:42.610 CC test/env/vtophys/vtophys.o 00:03:42.610 CXX test/cpp_headers/nvme_intel.o 00:03:42.610 CC test/app/jsoncat/jsoncat.o 00:03:42.610 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.610 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.610 CC app/fio/nvme/fio_plugin.o 00:03:42.610 CXX test/cpp_headers/nvme_zns.o 00:03:42.899 CXX test/cpp_headers/nvme_spec.o 00:03:42.899 CC test/thread/poller_perf/poller_perf.o 00:03:42.899 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.899 CXX test/cpp_headers/nvmf_spec.o 00:03:42.899 CC test/app/histogram_perf/histogram_perf.o 00:03:42.899 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:42.899 CXX test/cpp_headers/nvmf.o 00:03:42.899 CC test/app/stub/stub.o 00:03:42.899 CC examples/util/zipf/zipf.o 00:03:42.899 CXX test/cpp_headers/nvmf_transport.o 00:03:42.899 CXX test/cpp_headers/opal_spec.o 00:03:42.900 CXX test/cpp_headers/opal.o 00:03:42.900 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.900 CXX test/cpp_headers/pipe.o 00:03:42.900 CXX test/cpp_headers/pci_ids.o 00:03:42.900 CXX test/cpp_headers/queue.o 00:03:42.900 CXX test/cpp_headers/reduce.o 00:03:42.900 LINK rpc_client_test 00:03:42.900 CXX test/cpp_headers/rpc.o 00:03:42.900 CXX test/cpp_headers/scheduler.o 00:03:42.900 CXX test/cpp_headers/scsi_spec.o 00:03:42.900 CC examples/ioat/verify/verify.o 00:03:42.900 CXX test/cpp_headers/scsi.o 00:03:42.900 CXX test/cpp_headers/stdinc.o 00:03:42.900 CXX test/cpp_headers/sock.o 00:03:42.900 CXX test/cpp_headers/string.o 00:03:42.900 CXX test/cpp_headers/thread.o 00:03:42.900 CXX test/cpp_headers/tree.o 00:03:42.900 CXX test/cpp_headers/trace.o 00:03:42.900 CXX test/cpp_headers/trace_parser.o 00:03:42.900 CXX test/cpp_headers/ublk.o 00:03:42.900 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.900 CXX test/cpp_headers/version.o 00:03:42.900 CXX test/cpp_headers/util.o 00:03:42.900 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.900 CXX test/cpp_headers/uuid.o 00:03:42.900 CXX test/cpp_headers/vmd.o 00:03:42.900 CC test/app/bdev_svc/bdev_svc.o 00:03:42.900 CXX test/cpp_headers/vhost.o 00:03:42.900 CXX test/cpp_headers/zipf.o 00:03:42.900 CXX test/cpp_headers/xor.o 00:03:42.900 LINK spdk_nvme_discover 00:03:42.900 LINK spdk_trace_record 00:03:42.900 LINK interrupt_tgt 00:03:43.196 CC test/dma/test_dma/test_dma.o 00:03:43.196 CC app/fio/bdev/fio_plugin.o 00:03:43.196 LINK spdk_tgt 00:03:43.493 LINK iscsi_tgt 00:03:43.493 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:43.493 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.783 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:43.783 LINK jsoncat 00:03:43.783 LINK vtophys 00:03:43.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.783 LINK env_dpdk_post_init 00:03:43.783 LINK stub 00:03:43.783 LINK spdk_trace 00:03:43.783 LINK verify 00:03:44.049 LINK zipf 00:03:44.049 LINK bdev_svc 00:03:44.049 LINK nvmf_tgt 00:03:44.049 LINK ioat_perf 00:03:44.049 LINK poller_perf 00:03:44.049 LINK pci_ut 00:03:44.311 LINK nvme_fuzz 00:03:44.311 LINK vhost_fuzz 00:03:44.311 LINK spdk_nvme 00:03:44.311 LINK spdk_bdev 00:03:44.311 CC app/vhost/vhost.o 00:03:44.311 LINK histogram_perf 00:03:44.311 LINK mem_callbacks 00:03:44.311 LINK spdk_dd 00:03:44.574 CC examples/idxd/perf/perf.o 00:03:44.574 CC examples/sock/hello_world/hello_sock.o 00:03:44.574 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.574 CC examples/vmd/led/led.o 00:03:44.574 CC examples/thread/thread/thread_ex.o 00:03:44.574 LINK vhost 00:03:44.574 CC test/event/reactor/reactor.o 00:03:44.574 CC test/event/reactor_perf/reactor_perf.o 00:03:44.574 CC test/event/event_perf/event_perf.o 00:03:44.574 LINK lsvmd 00:03:44.574 CC test/event/app_repeat/app_repeat.o 00:03:44.574 LINK led 00:03:44.574 LINK test_dma 00:03:44.574 CC test/event/scheduler/scheduler.o 00:03:44.836 LINK hello_sock 00:03:44.836 LINK spdk_nvme_identify 00:03:44.836 LINK thread 00:03:44.836 LINK reactor 00:03:44.836 LINK event_perf 00:03:44.836 LINK reactor_perf 00:03:44.836 LINK idxd_perf 00:03:44.836 LINK spdk_top 00:03:44.836 LINK app_repeat 00:03:44.836 LINK spdk_nvme_perf 00:03:45.098 LINK scheduler 00:03:45.360 
LINK memory_ut 00:03:45.360 CC test/accel/dif/dif.o 00:03:45.360 CC test/nvme/overhead/overhead.o 00:03:45.360 CC test/nvme/e2edp/nvme_dp.o 00:03:45.360 CC test/nvme/aer/aer.o 00:03:45.360 CC examples/nvme/arbitration/arbitration.o 00:03:45.360 CC test/nvme/connect_stress/connect_stress.o 00:03:45.360 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.360 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.360 CC examples/nvme/hello_world/hello_world.o 00:03:45.360 CC test/nvme/simple_copy/simple_copy.o 00:03:45.360 CC test/nvme/sgl/sgl.o 00:03:45.360 CC test/nvme/err_injection/err_injection.o 00:03:45.360 CC test/nvme/reserve/reserve.o 00:03:45.360 CC test/nvme/fdp/fdp.o 00:03:45.360 CC examples/nvme/reconnect/reconnect.o 00:03:45.360 CC test/nvme/boot_partition/boot_partition.o 00:03:45.360 CC examples/nvme/hotplug/hotplug.o 00:03:45.360 CC examples/nvme/abort/abort.o 00:03:45.360 CC test/nvme/reset/reset.o 00:03:45.360 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.360 CC test/nvme/compliance/nvme_compliance.o 00:03:45.360 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.360 CC test/nvme/startup/startup.o 00:03:45.360 CC test/nvme/cuse/cuse.o 00:03:45.360 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.360 CC test/blobfs/mkfs/mkfs.o 00:03:45.360 LINK iscsi_fuzz 00:03:45.360 CC examples/accel/perf/accel_perf.o 00:03:45.360 CC examples/blob/cli/blobcli.o 00:03:45.360 CC examples/blob/hello_world/hello_blob.o 00:03:45.621 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.621 CC test/lvol/esnap/esnap.o 00:03:45.621 LINK pmr_persistence 00:03:45.621 LINK boot_partition 00:03:45.621 LINK fused_ordering 00:03:45.621 LINK connect_stress 00:03:45.621 LINK doorbell_aers 00:03:45.621 LINK cmb_copy 00:03:45.621 LINK startup 00:03:45.621 LINK err_injection 00:03:45.621 LINK hello_world 00:03:45.621 LINK hotplug 00:03:45.621 LINK simple_copy 00:03:45.621 LINK overhead 00:03:45.621 LINK nvme_dp 00:03:45.621 LINK reserve 00:03:45.621 LINK reset 00:03:45.621 LINK mkfs 00:03:45.621 LINK aer 00:03:45.621 LINK sgl 00:03:45.621 LINK nvme_compliance 00:03:45.621 LINK reconnect 00:03:45.621 LINK arbitration 00:03:45.883 LINK fdp 00:03:45.883 LINK abort 00:03:45.883 LINK hello_blob 00:03:45.883 LINK hello_fsdev 00:03:45.883 LINK nvme_manage 00:03:45.883 LINK dif 00:03:45.883 LINK blobcli 00:03:45.883 LINK accel_perf 00:03:46.460 LINK cuse 00:03:46.460 CC test/bdev/bdevio/bdevio.o 00:03:46.460 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.460 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.035 LINK hello_bdev 00:03:47.035 LINK bdevio 00:03:47.297 LINK bdevperf 00:03:47.872 CC examples/nvmf/nvmf/nvmf.o 00:03:48.446 LINK nvmf 00:03:49.393 LINK esnap 00:03:49.654 00:03:49.654 real 0m53.923s 00:03:49.654 user 6m36.078s 00:03:49.654 sys 4m39.570s 00:03:49.654 15:22:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:49.654 15:22:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.654 ************************************ 00:03:49.654 END TEST make 00:03:49.654 ************************************ 00:03:49.654 15:22:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.654 15:22:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.654 15:22:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.654 15:22:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.918 15:22:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.918 15:22:30 -- 
pm/common@44 -- $ pid=6863 00:03:49.918 15:22:30 -- pm/common@50 -- $ kill -TERM 6863 00:03:49.918 15:22:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.918 15:22:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.918 15:22:30 -- pm/common@44 -- $ pid=6864 00:03:49.918 15:22:30 -- pm/common@50 -- $ kill -TERM 6864 00:03:49.918 15:22:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.918 15:22:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:49.918 15:22:30 -- pm/common@44 -- $ pid=6866 00:03:49.918 15:22:30 -- pm/common@50 -- $ kill -TERM 6866 00:03:49.918 15:22:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.918 15:22:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:49.918 15:22:30 -- pm/common@44 -- $ pid=6889 00:03:49.918 15:22:30 -- pm/common@50 -- $ sudo -E kill -TERM 6889 00:03:49.918 15:22:30 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:49.918 15:22:30 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:49.918 15:22:30 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:49.918 15:22:30 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:49.918 15:22:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.918 15:22:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.918 15:22:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.918 15:22:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.918 15:22:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.918 15:22:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.918 15:22:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.918 15:22:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.918 15:22:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.918 15:22:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.918 15:22:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.918 15:22:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.918 15:22:30 -- scripts/common.sh@345 -- # : 1 00:03:49.918 15:22:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.918 15:22:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.918 15:22:30 -- scripts/common.sh@365 -- # decimal 1 00:03:49.918 15:22:30 -- scripts/common.sh@353 -- # local d=1 00:03:49.918 15:22:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.918 15:22:30 -- scripts/common.sh@355 -- # echo 1 00:03:49.918 15:22:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.918 15:22:30 -- scripts/common.sh@366 -- # decimal 2 00:03:49.918 15:22:30 -- scripts/common.sh@353 -- # local d=2 00:03:49.918 15:22:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.918 15:22:30 -- scripts/common.sh@355 -- # echo 2 00:03:49.918 15:22:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.918 15:22:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.918 15:22:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.918 15:22:30 -- scripts/common.sh@368 -- # return 0 00:03:49.918 15:22:30 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.918 15:22:30 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.918 --rc genhtml_branch_coverage=1 00:03:49.918 --rc genhtml_function_coverage=1 00:03:49.918 --rc genhtml_legend=1 00:03:49.918 --rc geninfo_all_blocks=1 00:03:49.918 --rc geninfo_unexecuted_blocks=1 00:03:49.918 00:03:49.918 ' 00:03:49.918 15:22:30 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.918 --rc genhtml_branch_coverage=1 00:03:49.918 --rc genhtml_function_coverage=1 00:03:49.918 --rc genhtml_legend=1 00:03:49.918 --rc geninfo_all_blocks=1 00:03:49.918 --rc geninfo_unexecuted_blocks=1 00:03:49.918 00:03:49.918 ' 00:03:49.918 15:22:30 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.918 --rc genhtml_branch_coverage=1 00:03:49.918 --rc genhtml_function_coverage=1 00:03:49.918 --rc genhtml_legend=1 00:03:49.918 --rc geninfo_all_blocks=1 00:03:49.918 --rc geninfo_unexecuted_blocks=1 00:03:49.918 00:03:49.918 ' 00:03:49.918 15:22:30 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:49.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.918 --rc genhtml_branch_coverage=1 00:03:49.918 --rc genhtml_function_coverage=1 00:03:49.918 --rc genhtml_legend=1 00:03:49.918 --rc geninfo_all_blocks=1 00:03:49.918 --rc geninfo_unexecuted_blocks=1 00:03:49.918 00:03:49.918 ' 00:03:49.918 15:22:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.918 15:22:30 -- nvmf/common.sh@7 -- # uname -s 00:03:49.918 15:22:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.918 15:22:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.918 15:22:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.918 15:22:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.918 15:22:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.918 15:22:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.918 15:22:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.918 15:22:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.918 15:22:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.918 15:22:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.182 15:22:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:50.182 15:22:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:50.182 15:22:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.182 15:22:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.182 15:22:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:50.182 15:22:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.182 15:22:30 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.182 15:22:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.182 15:22:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.182 15:22:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.182 15:22:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.182 15:22:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.182 15:22:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.182 15:22:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.182 15:22:30 -- paths/export.sh@5 -- # export PATH 00:03:50.182 15:22:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.182 15:22:30 -- nvmf/common.sh@51 -- # : 0 00:03:50.182 15:22:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.182 15:22:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:50.182 15:22:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.182 15:22:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.182 15:22:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.182 15:22:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.182 15:22:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.182 15:22:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.182 15:22:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.182 15:22:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.182 15:22:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.182 15:22:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.182 15:22:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:50.182 15:22:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:50.182 15:22:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.182 15:22:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.182 15:22:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.182 15:22:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.182 15:22:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.182 15:22:30 -- spdk/autotest.sh@48 -- # udevadm_pid=89372 00:03:50.182 15:22:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:50.182 15:22:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.182 15:22:30 -- pm/common@17 -- # local monitor 00:03:50.182 15:22:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.182 15:22:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.182 15:22:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.182 15:22:30 -- pm/common@21 -- # date +%s 00:03:50.182 15:22:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.182 15:22:30 -- pm/common@25 -- # sleep 1 00:03:50.182 15:22:30 -- pm/common@21 -- # date +%s 00:03:50.182 15:22:30 -- pm/common@21 -- # date +%s 00:03:50.182 15:22:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727443350 00:03:50.182 15:22:30 -- pm/common@21 -- # date +%s 00:03:50.182 15:22:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727443350 00:03:50.182 15:22:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727443350 00:03:50.182 15:22:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727443350 00:03:50.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727443350_collect-cpu-load.pm.log 00:03:50.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727443350_collect-vmstat.pm.log 00:03:50.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727443350_collect-cpu-temp.pm.log 00:03:50.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727443350_collect-bmc-pm.bmc.pm.log 00:03:51.130 15:22:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.130 15:22:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.130 15:22:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.130 15:22:31 -- common/autotest_common.sh@10 -- # set +x 00:03:51.130 15:22:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:51.130 15:22:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:51.130 15:22:31 -- common/autotest_common.sh@10 -- # set +x 00:03:51.130 15:22:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:51.130 15:22:31 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.130 15:22:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.130 15:22:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:51.130 15:22:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.130 15:22:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:51.130 15:22:31 -- common/autotest_common.sh@1455 -- # uname 00:03:51.130 15:22:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:51.130 15:22:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:51.130 15:22:31 -- common/autotest_common.sh@1475 -- # uname 00:03:51.130 15:22:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:51.130 15:22:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:51.130 15:22:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:51.392 lcov: LCOV version 1.15 00:03:51.392 15:22:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:06.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:06.312 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:21.226 15:23:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:21.226 15:23:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.226 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:04:21.226 15:23:01 -- spdk/autotest.sh@78 -- # rm -f 00:04:21.226 15:23:01 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.436 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:25.436 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:25.436 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:25.436 15:23:05 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:25.436 15:23:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:25.436 15:23:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:25.436 15:23:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:25.436 15:23:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:25.437 15:23:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:25.437 15:23:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:25.437 15:23:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.437 15:23:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:25.437 15:23:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:25.437 15:23:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:25.437 15:23:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:25.437 15:23:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:25.437 15:23:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:25.437 15:23:05 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:25.437 No valid GPT data, bailing 00:04:25.437 15:23:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.437 15:23:05 -- scripts/common.sh@394 -- # pt= 00:04:25.437 15:23:05 -- scripts/common.sh@395 -- # return 1 00:04:25.437 15:23:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:25.437 1+0 records in 00:04:25.437 1+0 records out 00:04:25.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470755 s, 223 MB/s 00:04:25.437 15:23:05 -- spdk/autotest.sh@105 -- # sync 00:04:25.437 15:23:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:25.437 15:23:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:25.437 15:23:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:35.446 15:23:14 -- spdk/autotest.sh@111 -- # uname -s 00:04:35.446 15:23:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:35.446 15:23:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:35.446 15:23:14 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:37.361 Hugepages 00:04:37.362 node hugesize free / total 00:04:37.362 node0 1048576kB 0 / 0 00:04:37.362 node0 2048kB 0 / 0 00:04:37.362 node1 1048576kB 0 / 0 00:04:37.362 node1 2048kB 0 / 0 00:04:37.362 00:04:37.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.362 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:37.362 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:37.622 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:37.622 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:37.622 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:37.622 15:23:17 -- spdk/autotest.sh@117 -- # uname -s 00:04:37.622 15:23:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:37.622 15:23:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:37.622 15:23:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.829 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.829 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:43.216 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:43.478 15:23:23 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:44.425 15:23:24 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:44.425 15:23:24 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:44.425 15:23:24 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.425 15:23:24 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:44.425 15:23:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:44.425 15:23:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:44.425 15:23:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.425 15:23:24 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.425 15:23:24 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:44.687 15:23:24 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:44.687 15:23:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:44.687 15:23:24 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.996 Waiting for block devices as requested 00:04:47.996 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:48.259 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:48.259 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.259 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.521 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.521 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.521 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:48.783 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:48.783 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:49.045 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:49.045 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:49.045 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:49.308 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:49.308 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:49.308 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:49.308 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:49.570 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:49.831 15:23:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:49.831 15:23:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:49.831 15:23:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:49.831 15:23:30 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:49.831 15:23:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.831 15:23:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:49.832 15:23:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.832 15:23:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:49.832 15:23:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:49.832 15:23:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:49.832 15:23:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:49.832 15:23:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:49.832 15:23:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:49.832 15:23:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:49.832 15:23:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:49.832 15:23:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:49.832 15:23:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:49.832 15:23:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:49.832 15:23:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:49.832 15:23:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:49.832 15:23:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:49.832 15:23:30 -- common/autotest_common.sh@1541 -- # continue 00:04:49.832 15:23:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:49.832 15:23:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.832 15:23:30 -- common/autotest_common.sh@10 -- # set +x 00:04:49.832 15:23:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:49.832 15:23:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.832 15:23:30 -- common/autotest_common.sh@10 -- # set +x 00:04:49.832 15:23:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.046 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.046 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:54.046 15:23:34 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:54.046 15:23:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.046 15:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.046 15:23:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:54.046 15:23:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:54.046 15:23:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.046 15:23:34 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:54.046 15:23:34 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:54.046 15:23:34 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:54.046 15:23:34 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:54.046 15:23:34 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:54.046 15:23:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:54.046 15:23:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:54.046 15:23:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.046 15:23:34 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:54.046 15:23:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:54.309 15:23:34 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:54.309 15:23:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:54.309 15:23:34 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:54.309 15:23:34 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:54.309 15:23:34 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:54.309 15:23:34 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:54.309 15:23:34 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:54.309 15:23:34 -- common/autotest_common.sh@1570 -- # return 0 00:04:54.309 15:23:34 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:54.309 15:23:34 -- common/autotest_common.sh@1578 -- # return 0 00:04:54.309 15:23:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:54.309 15:23:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:54.309 15:23:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.309 15:23:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.309 15:23:34 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:54.309 15:23:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.309 15:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.309 15:23:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:54.309 15:23:34 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.309 15:23:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.309 15:23:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.309 15:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.309 ************************************ 00:04:54.309 START TEST env 00:04:54.309 ************************************ 00:04:54.309 15:23:34 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.309 * Looking for test storage... 
00:04:54.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:54.309 15:23:34 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:54.309 15:23:34 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:54.309 15:23:34 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:54.571 15:23:34 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.571 15:23:34 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.571 15:23:34 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.571 15:23:34 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.571 15:23:34 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.571 15:23:34 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.571 15:23:34 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.571 15:23:34 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.571 15:23:34 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.571 15:23:34 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.571 15:23:34 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.571 15:23:34 env -- scripts/common.sh@344 -- # case "$op" in 00:04:54.571 15:23:34 env -- scripts/common.sh@345 -- # : 1 00:04:54.571 15:23:34 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.571 15:23:34 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.571 15:23:34 env -- scripts/common.sh@365 -- # decimal 1 00:04:54.571 15:23:34 env -- scripts/common.sh@353 -- # local d=1 00:04:54.571 15:23:34 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.571 15:23:34 env -- scripts/common.sh@355 -- # echo 1 00:04:54.571 15:23:34 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.571 15:23:34 env -- scripts/common.sh@366 -- # decimal 2 00:04:54.571 15:23:34 env -- scripts/common.sh@353 -- # local d=2 00:04:54.571 15:23:34 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.571 15:23:34 env -- scripts/common.sh@355 -- # echo 2 00:04:54.571 15:23:34 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.571 15:23:34 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.571 15:23:34 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.571 15:23:34 env -- scripts/common.sh@368 -- # return 0 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.571 --rc genhtml_branch_coverage=1 00:04:54.571 --rc genhtml_function_coverage=1 00:04:54.571 --rc genhtml_legend=1 00:04:54.571 --rc geninfo_all_blocks=1 00:04:54.571 --rc geninfo_unexecuted_blocks=1 00:04:54.571 00:04:54.571 ' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.571 --rc genhtml_branch_coverage=1 00:04:54.571 --rc genhtml_function_coverage=1 00:04:54.571 --rc genhtml_legend=1 00:04:54.571 --rc geninfo_all_blocks=1 00:04:54.571 --rc geninfo_unexecuted_blocks=1 00:04:54.571 00:04:54.571 ' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.571 --rc genhtml_branch_coverage=1 00:04:54.571 --rc genhtml_function_coverage=1 
00:04:54.571 --rc genhtml_legend=1 00:04:54.571 --rc geninfo_all_blocks=1 00:04:54.571 --rc geninfo_unexecuted_blocks=1 00:04:54.571 00:04:54.571 ' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:54.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.571 --rc genhtml_branch_coverage=1 00:04:54.571 --rc genhtml_function_coverage=1 00:04:54.571 --rc genhtml_legend=1 00:04:54.571 --rc geninfo_all_blocks=1 00:04:54.571 --rc geninfo_unexecuted_blocks=1 00:04:54.571 00:04:54.571 ' 00:04:54.571 15:23:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.571 15:23:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.571 15:23:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.571 ************************************ 00:04:54.571 START TEST env_memory 00:04:54.571 ************************************ 00:04:54.571 15:23:34 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.571 00:04:54.571 00:04:54.571 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.571 http://cunit.sourceforge.net/ 00:04:54.571 00:04:54.571 00:04:54.571 Suite: memory 00:04:54.571 Test: alloc and free memory map ...[2024-09-27 15:23:34.896589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.571 passed 00:04:54.571 Test: mem map translation ...[2024-09-27 15:23:34.914161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.572 [2024-09-27 15:23:34.914184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.572 [2024-09-27 15:23:34.914216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.572 [2024-09-27 15:23:34.914221] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.572 passed 00:04:54.572 Test: mem map registration ...[2024-09-27 15:23:34.952228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:54.572 [2024-09-27 15:23:34.952245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:54.572 passed 00:04:54.572 Test: mem map adjacent registrations ...passed 00:04:54.572 00:04:54.572 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.572 suites 1 1 n/a 0 0 00:04:54.572 tests 4 4 4 0 0 00:04:54.572 asserts 152 152 152 0 n/a 00:04:54.572 00:04:54.572 Elapsed time = 0.128 seconds 00:04:54.572 00:04:54.572 real 0m0.144s 00:04:54.572 user 0m0.124s 00:04:54.572 sys 0m0.016s 00:04:54.572 15:23:35 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.572 15:23:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:54.572 ************************************ 00:04:54.572 END TEST env_memory 00:04:54.572 ************************************ 00:04:54.572 15:23:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.572 15:23:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.572 15:23:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.572 15:23:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.834 ************************************ 00:04:54.835 START TEST env_vtophys 00:04:54.835 ************************************ 00:04:54.835 15:23:35 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.835 EAL: lib.eal log level changed from notice to debug 00:04:54.835 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.835 EAL: Detected lcore 1 as core 1 on socket 0 00:04:54.835 EAL: Detected lcore 2 as core 2 on socket 0 00:04:54.835 EAL: Detected lcore 3 as core 3 on socket 0 00:04:54.835 EAL: Detected lcore 4 as core 4 on socket 0 00:04:54.835 EAL: Detected lcore 5 as core 5 on socket 0 00:04:54.835 EAL: Detected lcore 6 as core 6 on socket 0 00:04:54.835 EAL: Detected lcore 7 as core 7 on socket 0 00:04:54.835 EAL: Detected lcore 8 as core 8 on socket 0 00:04:54.835 EAL: Detected lcore 9 as core 9 on socket 0 00:04:54.835 EAL: Detected lcore 10 as core 10 on socket 0 00:04:54.835 EAL: Detected lcore 11 as core 11 on socket 0 00:04:54.835 EAL: Detected lcore 12 as core 12 on socket 0 00:04:54.835 EAL: Detected lcore 13 as core 13 on socket 0 00:04:54.835 EAL: Detected lcore 14 as core 14 on socket 0 00:04:54.835 EAL: Detected lcore 15 as core 15 on socket 0 00:04:54.835 EAL: Detected lcore 16 as core 16 on socket 0 00:04:54.835 EAL: Detected lcore 17 as core 17 on socket 0 00:04:54.835 EAL: Detected lcore 18 as core 18 on socket 0 00:04:54.835 EAL: Detected lcore 19 as core 19 on socket 0 00:04:54.835 EAL: Detected lcore 20 as core 20 on socket 0 00:04:54.835 EAL: Detected lcore 21 as core 21 on socket 0 00:04:54.835 EAL: Detected lcore 22 as core 22 on socket 0 00:04:54.835 EAL: Detected lcore 23 as core 23 on socket 0 00:04:54.835 EAL: Detected lcore 24 as core 24 on socket 0 00:04:54.835 EAL: Detected lcore 25 as core 25 on socket 0 00:04:54.835 EAL: Detected lcore 26 as core 26 on socket 0 00:04:54.835 EAL: Detected lcore 27 as core 27 on socket 0 00:04:54.835 EAL: Detected lcore 28 as core 28 on socket 0 00:04:54.835 EAL: Detected lcore 29 as core 29 on socket 0 00:04:54.835 EAL: Detected lcore 30 as core 30 on socket 0 00:04:54.835 EAL: Detected lcore 31 as core 31 on socket 0 00:04:54.835 EAL: Detected lcore 32 as core 32 on socket 0 00:04:54.835 EAL: Detected lcore 33 as core 33 on socket 0 00:04:54.835 EAL: Detected lcore 34 as core 34 on socket 0 00:04:54.835 EAL: Detected lcore 35 as core 35 on socket 0 00:04:54.835 EAL: Detected lcore 36 as core 0 on socket 1 00:04:54.835 EAL: Detected lcore 37 as core 1 on socket 1 00:04:54.835 EAL: Detected lcore 38 as core 2 on socket 1 00:04:54.835 EAL: Detected lcore 39 as core 3 on socket 1 00:04:54.835 EAL: Detected lcore 40 as core 4 on socket 1 00:04:54.835 EAL: Detected lcore 41 as core 5 on socket 1 00:04:54.835 EAL: Detected lcore 42 as core 6 on socket 1 00:04:54.835 EAL: Detected lcore 43 as core 7 on socket 1 00:04:54.835 EAL: Detected lcore 44 as core 8 on socket 1 00:04:54.835 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:54.835 EAL: Detected lcore 46 as core 10 on socket 1 00:04:54.835 EAL: Detected lcore 47 as core 11 on socket 1 00:04:54.835 EAL: Detected lcore 48 as core 12 on socket 1 00:04:54.835 EAL: Detected lcore 49 as core 13 on socket 1 00:04:54.835 EAL: Detected lcore 50 as core 14 on socket 1 00:04:54.835 EAL: Detected lcore 51 as core 15 on socket 1 00:04:54.835 EAL: Detected lcore 52 as core 16 on socket 1 00:04:54.835 EAL: Detected lcore 53 as core 17 on socket 1 00:04:54.835 EAL: Detected lcore 54 as core 18 on socket 1 00:04:54.835 EAL: Detected lcore 55 as core 19 on socket 1 00:04:54.835 EAL: Detected lcore 56 as core 20 on socket 1 00:04:54.835 EAL: Detected lcore 57 as core 21 on socket 1 00:04:54.835 EAL: Detected lcore 58 as core 22 on socket 1 00:04:54.835 EAL: Detected lcore 59 as core 23 on socket 1 00:04:54.835 EAL: Detected lcore 60 as core 24 on socket 1 00:04:54.835 EAL: Detected lcore 61 as core 25 on socket 1 00:04:54.835 EAL: Detected lcore 62 as core 26 on socket 1 00:04:54.835 EAL: Detected lcore 63 as core 27 on socket 1 00:04:54.835 EAL: Detected lcore 64 as core 28 on socket 1 00:04:54.835 EAL: Detected lcore 65 as core 29 on socket 1 00:04:54.835 EAL: Detected lcore 66 as core 30 on socket 1 00:04:54.835 EAL: Detected lcore 67 as core 31 on socket 1 00:04:54.835 EAL: Detected lcore 68 as core 32 on socket 1 00:04:54.835 EAL: Detected lcore 69 as core 33 on socket 1 00:04:54.835 EAL: Detected lcore 70 as core 34 on socket 1 00:04:54.835 EAL: Detected lcore 71 as core 35 on socket 1 00:04:54.835 EAL: Detected lcore 72 as core 0 on socket 0 00:04:54.835 EAL: Detected lcore 73 as core 1 on socket 0 00:04:54.835 EAL: Detected lcore 74 as core 2 on socket 0 00:04:54.835 EAL: Detected lcore 75 as core 3 on socket 0 00:04:54.835 EAL: Detected lcore 76 as core 4 on socket 0 00:04:54.835 EAL: Detected lcore 77 as core 5 on socket 0 00:04:54.835 EAL: Detected lcore 78 as core 6 on socket 0 00:04:54.835 EAL: Detected lcore 79 as core 7 on socket 0 00:04:54.835 EAL: Detected lcore 80 as core 8 on socket 0 00:04:54.835 EAL: Detected lcore 81 as core 9 on socket 0 00:04:54.835 EAL: Detected lcore 82 as core 10 on socket 0 00:04:54.835 EAL: Detected lcore 83 as core 11 on socket 0 00:04:54.835 EAL: Detected lcore 84 as core 12 on socket 0 00:04:54.835 EAL: Detected lcore 85 as core 13 on socket 0 00:04:54.835 EAL: Detected lcore 86 as core 14 on socket 0 00:04:54.835 EAL: Detected lcore 87 as core 15 on socket 0 00:04:54.835 EAL: Detected lcore 88 as core 16 on socket 0 00:04:54.835 EAL: Detected lcore 89 as core 17 on socket 0 00:04:54.835 EAL: Detected lcore 90 as core 18 on socket 0 00:04:54.835 EAL: Detected lcore 91 as core 19 on socket 0 00:04:54.835 EAL: Detected lcore 92 as core 20 on socket 0 00:04:54.835 EAL: Detected lcore 93 as core 21 on socket 0 00:04:54.835 EAL: Detected lcore 94 as core 22 on socket 0 00:04:54.835 EAL: Detected lcore 95 as core 23 on socket 0 00:04:54.835 EAL: Detected lcore 96 as core 24 on socket 0 00:04:54.835 EAL: Detected lcore 97 as core 25 on socket 0 00:04:54.835 EAL: Detected lcore 98 as core 26 on socket 0 00:04:54.835 EAL: Detected lcore 99 as core 27 on socket 0 00:04:54.835 EAL: Detected lcore 100 as core 28 on socket 0 00:04:54.835 EAL: Detected lcore 101 as core 29 on socket 0 00:04:54.835 EAL: Detected lcore 102 as core 30 on socket 0 00:04:54.835 EAL: Detected lcore 103 as core 31 on socket 0 00:04:54.835 EAL: Detected lcore 104 as core 32 on socket 0 00:04:54.835 EAL: Detected lcore 105 as core 33 on socket 0 00:04:54.835 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:54.835 EAL: Detected lcore 107 as core 35 on socket 0 00:04:54.835 EAL: Detected lcore 108 as core 0 on socket 1 00:04:54.835 EAL: Detected lcore 109 as core 1 on socket 1 00:04:54.835 EAL: Detected lcore 110 as core 2 on socket 1 00:04:54.835 EAL: Detected lcore 111 as core 3 on socket 1 00:04:54.835 EAL: Detected lcore 112 as core 4 on socket 1 00:04:54.835 EAL: Detected lcore 113 as core 5 on socket 1 00:04:54.835 EAL: Detected lcore 114 as core 6 on socket 1 00:04:54.835 EAL: Detected lcore 115 as core 7 on socket 1 00:04:54.835 EAL: Detected lcore 116 as core 8 on socket 1 00:04:54.835 EAL: Detected lcore 117 as core 9 on socket 1 00:04:54.835 EAL: Detected lcore 118 as core 10 on socket 1 00:04:54.835 EAL: Detected lcore 119 as core 11 on socket 1 00:04:54.835 EAL: Detected lcore 120 as core 12 on socket 1 00:04:54.835 EAL: Detected lcore 121 as core 13 on socket 1 00:04:54.835 EAL: Detected lcore 122 as core 14 on socket 1 00:04:54.835 EAL: Detected lcore 123 as core 15 on socket 1 00:04:54.835 EAL: Detected lcore 124 as core 16 on socket 1 00:04:54.835 EAL: Detected lcore 125 as core 17 on socket 1 00:04:54.835 EAL: Detected lcore 126 as core 18 on socket 1 00:04:54.835 EAL: Detected lcore 127 as core 19 on socket 1 00:04:54.835 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:54.835 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:54.835 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:54.835 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:54.835 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:54.835 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:54.835 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:54.835 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:54.835 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:54.835 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:54.835 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:54.835 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:54.835 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:54.835 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:54.835 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:54.835 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:54.835 EAL: Maximum logical cores by configuration: 128 00:04:54.835 EAL: Detected CPU lcores: 128 00:04:54.835 EAL: Detected NUMA nodes: 2 00:04:54.835 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:54.835 EAL: Detected shared linkage of DPDK 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:54.835 EAL: Registered [vdev] bus. 
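The EAL lines above pair every logical CPU with a physical core and socket, stop at the configured maximum of 128 lcores (lcores 128-143 on socket 1 are skipped), and report 2 NUMA nodes. On Linux the topology EAL consumes here is exposed under sysfs; as a minimal sketch, independent of this test run, the equivalent mapping can be printed directly:

#!/usr/bin/env bash
# Sketch only: print "lcore N as core M on socket S" for each logical
# CPU, using the sysfs topology files that DPDK's EAL reads on Linux.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}
    core=$(<"$cpu/topology/core_id")
    socket=$(<"$cpu/topology/physical_package_id")
    echo "lcore $n as core $core on socket $socket"
done | sort -k2,2n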
00:04:54.835 EAL: bus.vdev log level changed from disabled to notice 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:54.835 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:54.835 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:54.835 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:54.835 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.835 EAL: No shared files mode enabled, IPC is disabled 00:04:54.835 EAL: Bus pci wants IOVA as 'DC' 00:04:54.835 EAL: Bus vdev wants IOVA as 'DC' 00:04:54.835 EAL: Buses did not request a specific IOVA mode. 00:04:54.835 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:54.835 EAL: Selected IOVA mode 'VA' 00:04:54.835 EAL: Probing VFIO support... 00:04:54.835 EAL: IOMMU type 1 (Type 1) is supported 00:04:54.835 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:54.835 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:54.835 EAL: VFIO support initialized 00:04:54.835 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.835 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.835 EAL: Setting up physically contiguous memory... 
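The "Probing VFIO support..." block above shows why IOVA mode 'VA' was selected: IOMMU type 1 is supported and VFIO initialized, so guest-virtual addressing is safe. A rough userspace approximation of those preconditions follows; this is a hedged sketch, not the real probe, which issues VFIO_CHECK_EXTENSION ioctls against the container device:

#!/usr/bin/env bash
# Sketch only: approximate the checks behind "VFIO support initialized"
# and the IOVA-as-'VA' decision. Populated IOMMU groups and the VFIO
# container device are the visible userspace symptoms of an active IOMMU.
[[ -c /dev/vfio/vfio ]] && echo "VFIO container device present"
groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
if (( groups > 0 )); then
    echo "IOMMU active ($groups groups): IOVA mode 'VA' is usable"
else
    echo "no IOMMU groups: EAL would typically fall back to IOVA mode 'PA'"
fi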
00:04:54.836 EAL: Setting maximum number of open files to 524288 00:04:54.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.836 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:54.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:54.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.836 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:54.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.836 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:54.836 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:54.836 EAL: Hugepages will be freed exactly as allocated. 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: TSC frequency is ~2400000 KHz 00:04:54.836 EAL: Main lcore 0 is ready (tid=7fc331aafa00;cpuset=[0]) 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 0 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.836 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.836 00:04:54.836 00:04:54.836 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.836 http://cunit.sourceforge.net/ 00:04:54.836 00:04:54.836 00:04:54.836 Suite: components_suite 00:04:54.836 Test: vtophys_malloc_test ...passed 00:04:54.836 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.836 EAL: Trying to obtain current memory policy. 
00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 130MB 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was shrunk by 130MB 00:04:54.836 EAL: Trying to obtain current memory policy. 00:04:54.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.836 EAL: Restoring previous memory policy: 4 00:04:54.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.836 EAL: request: mp_malloc_sync 00:04:54.836 EAL: No shared files mode enabled, IPC is disabled 00:04:54.836 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.097 EAL: request: mp_malloc_sync 00:04:55.097 EAL: No shared files mode enabled, IPC is disabled 00:04:55.097 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.097 EAL: Trying to obtain current memory policy. 
00:04:55.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.097 EAL: Restoring previous memory policy: 4 00:04:55.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.097 EAL: request: mp_malloc_sync 00:04:55.097 EAL: No shared files mode enabled, IPC is disabled 00:04:55.097 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.097 EAL: request: mp_malloc_sync 00:04:55.097 EAL: No shared files mode enabled, IPC is disabled 00:04:55.097 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.097 EAL: Trying to obtain current memory policy. 00:04:55.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.358 EAL: Restoring previous memory policy: 4 00:04:55.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.358 EAL: request: mp_malloc_sync 00:04:55.358 EAL: No shared files mode enabled, IPC is disabled 00:04:55.358 EAL: Heap on socket 0 was expanded by 1026MB 00:04:55.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.619 EAL: request: mp_malloc_sync 00:04:55.619 EAL: No shared files mode enabled, IPC is disabled 00:04:55.619 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.619 passed 00:04:55.619 00:04:55.619 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.619 suites 1 1 n/a 0 0 00:04:55.619 tests 2 2 2 0 0 00:04:55.619 asserts 497 497 497 0 n/a 00:04:55.619 00:04:55.619 Elapsed time = 0.685 seconds 00:04:55.619 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.619 EAL: request: mp_malloc_sync 00:04:55.619 EAL: No shared files mode enabled, IPC is disabled 00:04:55.619 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.619 EAL: No shared files mode enabled, IPC is disabled 00:04:55.619 EAL: No shared files mode enabled, IPC is disabled 00:04:55.619 EAL: No shared files mode enabled, IPC is disabled 00:04:55.619 00:04:55.619 real 0m0.822s 00:04:55.619 user 0m0.418s 00:04:55.619 sys 0m0.377s 00:04:55.619 15:23:35 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.619 15:23:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:55.619 ************************************ 00:04:55.619 END TEST env_vtophys 00:04:55.619 ************************************ 00:04:55.619 15:23:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.619 15:23:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.619 15:23:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.619 15:23:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.619 ************************************ 00:04:55.619 START TEST env_pci 00:04:55.619 ************************************ 00:04:55.619 15:23:35 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.619 00:04:55.619 00:04:55.619 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.619 http://cunit.sourceforge.net/ 00:04:55.619 00:04:55.619 00:04:55.619 Suite: pci 00:04:55.619 Test: pci_hook ...[2024-09-27 15:23:35.993403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 108928 has claimed it 00:04:55.619 EAL: Cannot find device (10000:00:01.0) 00:04:55.619 EAL: Failed to attach device on primary process 00:04:55.619 passed 00:04:55.619 00:04:55.619 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:55.619 suites 1 1 n/a 0 0 00:04:55.619 tests 1 1 1 0 0 00:04:55.619 asserts 25 25 25 0 n/a 00:04:55.619 00:04:55.619 Elapsed time = 0.031 seconds 00:04:55.619 00:04:55.619 real 0m0.051s 00:04:55.619 user 0m0.014s 00:04:55.619 sys 0m0.036s 00:04:55.619 15:23:36 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.619 15:23:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:55.619 ************************************ 00:04:55.619 END TEST env_pci 00:04:55.619 ************************************ 00:04:55.619 15:23:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.619 15:23:36 env -- env/env.sh@15 -- # uname 00:04:55.619 15:23:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.619 15:23:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.619 15:23:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.619 15:23:36 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:55.620 15:23:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.620 15:23:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.882 ************************************ 00:04:55.882 START TEST env_dpdk_post_init 00:04:55.882 ************************************ 00:04:55.882 15:23:36 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.882 EAL: Detected CPU lcores: 128 00:04:55.882 EAL: Detected NUMA nodes: 2 00:04:55.882 EAL: Detected shared linkage of DPDK 00:04:55.882 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.882 EAL: Selected IOVA mode 'VA' 00:04:55.882 EAL: VFIO support initialized 00:04:55.882 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.882 EAL: Using IOMMU type 1 (Type 1) 00:04:55.882 EAL: Ignore mapping IO port bar(1) 00:04:56.143 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:56.143 EAL: Ignore mapping IO port bar(1) 00:04:56.405 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:56.405 EAL: Ignore mapping IO port bar(1) 00:04:56.667 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:56.667 EAL: Ignore mapping IO port bar(1) 00:04:56.667 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:56.929 EAL: Ignore mapping IO port bar(1) 00:04:56.929 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:57.191 EAL: Ignore mapping IO port bar(1) 00:04:57.191 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:57.452 EAL: Ignore mapping IO port bar(1) 00:04:57.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:57.452 EAL: Ignore mapping IO port bar(1) 00:04:57.713 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:57.975 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:57.975 EAL: Ignore mapping IO port bar(1) 00:04:58.237 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:58.237 EAL: Ignore mapping IO port bar(1) 00:04:58.237 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:58.499 EAL: Ignore mapping IO port bar(1) 00:04:58.499 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:58.760 EAL: Ignore mapping IO port bar(1) 00:04:58.760 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:59.022 EAL: Ignore mapping IO port bar(1) 00:04:59.022 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:59.022 EAL: Ignore mapping IO port bar(1) 00:04:59.284 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:59.284 EAL: Ignore mapping IO port bar(1) 00:04:59.546 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:59.546 EAL: Ignore mapping IO port bar(1) 00:04:59.808 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:59.808 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:59.808 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:59.808 Starting DPDK initialization... 00:04:59.808 Starting SPDK post initialization... 00:04:59.808 SPDK NVMe probe 00:04:59.808 Attaching to 0000:65:00.0 00:04:59.808 Attached to 0000:65:00.0 00:04:59.808 Cleaning up... 00:05:01.727 00:05:01.727 real 0m5.729s 00:05:01.727 user 0m0.178s 00:05:01.727 sys 0m0.107s 00:05:01.727 15:23:41 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.727 15:23:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 END TEST env_dpdk_post_init 00:05:01.727 ************************************ 00:05:01.727 15:23:41 env -- env/env.sh@26 -- # uname 00:05:01.727 15:23:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.727 15:23:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.727 15:23:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.727 15:23:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.727 15:23:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 START TEST env_mem_callbacks 00:05:01.727 ************************************ 00:05:01.727 15:23:41 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.727 EAL: Detected CPU lcores: 128 00:05:01.727 EAL: Detected NUMA nodes: 2 00:05:01.727 EAL: Detected shared linkage of DPDK 00:05:01.727 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.727 EAL: Selected IOVA mode 'VA' 00:05:01.727 EAL: VFIO support initialized 00:05:01.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.727 00:05:01.727 00:05:01.727 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.727 http://cunit.sourceforge.net/ 00:05:01.727 00:05:01.727 00:05:01.727 Suite: memory 00:05:01.727 Test: test ... 
00:05:01.727 register 0x200000200000 2097152 00:05:01.727 malloc 3145728 00:05:01.727 register 0x200000400000 4194304 00:05:01.727 buf 0x200000500000 len 3145728 PASSED 00:05:01.727 malloc 64 00:05:01.727 buf 0x2000004fff40 len 64 PASSED 00:05:01.727 malloc 4194304 00:05:01.727 register 0x200000800000 6291456 00:05:01.727 buf 0x200000a00000 len 4194304 PASSED 00:05:01.727 free 0x200000500000 3145728 00:05:01.727 free 0x2000004fff40 64 00:05:01.727 unregister 0x200000400000 4194304 PASSED 00:05:01.727 free 0x200000a00000 4194304 00:05:01.727 unregister 0x200000800000 6291456 PASSED 00:05:01.727 malloc 8388608 00:05:01.727 register 0x200000400000 10485760 00:05:01.727 buf 0x200000600000 len 8388608 PASSED 00:05:01.727 free 0x200000600000 8388608 00:05:01.727 unregister 0x200000400000 10485760 PASSED 00:05:01.727 passed 00:05:01.727 00:05:01.727 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.727 suites 1 1 n/a 0 0 00:05:01.727 tests 1 1 1 0 0 00:05:01.727 asserts 15 15 15 0 n/a 00:05:01.727 00:05:01.727 Elapsed time = 0.010 seconds 00:05:01.727 00:05:01.727 real 0m0.071s 00:05:01.727 user 0m0.019s 00:05:01.727 sys 0m0.051s 00:05:01.727 15:23:42 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.727 15:23:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 END TEST env_mem_callbacks 00:05:01.727 ************************************ 00:05:01.727 00:05:01.727 real 0m7.424s 00:05:01.727 user 0m1.008s 00:05:01.727 sys 0m0.973s 00:05:01.727 15:23:42 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.727 15:23:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 END TEST env 00:05:01.727 ************************************ 00:05:01.728 15:23:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:01.728 15:23:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.728 15:23:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.728 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:05:01.728 ************************************ 00:05:01.728 START TEST rpc 00:05:01.728 ************************************ 00:05:01.728 15:23:42 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:01.990 * Looking for test storage... 
00:05:01.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.990 15:23:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.990 15:23:42 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.990 15:23:42 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.990 15:23:42 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.990 15:23:42 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.990 15:23:42 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.990 15:23:42 rpc -- scripts/common.sh@345 -- # : 1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.990 15:23:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.990 15:23:42 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.990 15:23:42 rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.990 15:23:42 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.990 15:23:42 rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.990 15:23:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.990 15:23:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.990 15:23:42 rpc -- scripts/common.sh@368 -- # return 0 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.990 --rc genhtml_branch_coverage=1 00:05:01.990 --rc genhtml_function_coverage=1 00:05:01.990 --rc genhtml_legend=1 00:05:01.990 --rc geninfo_all_blocks=1 00:05:01.990 --rc geninfo_unexecuted_blocks=1 00:05:01.990 00:05:01.990 ' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.990 --rc genhtml_branch_coverage=1 00:05:01.990 --rc genhtml_function_coverage=1 00:05:01.990 --rc genhtml_legend=1 00:05:01.990 --rc geninfo_all_blocks=1 00:05:01.990 --rc geninfo_unexecuted_blocks=1 00:05:01.990 00:05:01.990 ' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.990 --rc genhtml_branch_coverage=1 00:05:01.990 --rc genhtml_function_coverage=1 
00:05:01.990 --rc genhtml_legend=1 00:05:01.990 --rc geninfo_all_blocks=1 00:05:01.990 --rc geninfo_unexecuted_blocks=1 00:05:01.990 00:05:01.990 ' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.990 --rc genhtml_branch_coverage=1 00:05:01.990 --rc genhtml_function_coverage=1 00:05:01.990 --rc genhtml_legend=1 00:05:01.990 --rc geninfo_all_blocks=1 00:05:01.990 --rc geninfo_unexecuted_blocks=1 00:05:01.990 00:05:01.990 ' 00:05:01.990 15:23:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=110250 00:05:01.990 15:23:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.990 15:23:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:01.990 15:23:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 110250 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@831 -- # '[' -z 110250 ']' 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.990 15:23:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.990 [2024-09-27 15:23:42.381145] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:01.990 [2024-09-27 15:23:42.381205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110250 ] 00:05:01.990 [2024-09-27 15:23:42.455659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.252 [2024-09-27 15:23:42.501915] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.252 [2024-09-27 15:23:42.501967] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110250' to capture a snapshot of events at runtime. 00:05:02.252 [2024-09-27 15:23:42.501976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.252 [2024-09-27 15:23:42.501982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.252 [2024-09-27 15:23:42.501988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110250 for offline analysis/debug. 
00:05:02.252 [2024-09-27 15:23:42.502011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.826 15:23:43 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.826 15:23:43 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.826 15:23:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.826 15:23:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.826 15:23:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.826 15:23:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.826 15:23:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.826 15:23:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.826 15:23:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 ************************************ 00:05:02.826 START TEST rpc_integrity 00:05:02.826 ************************************ 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.826 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.088 { 00:05:03.088 "name": "Malloc0", 00:05:03.088 "aliases": [ 00:05:03.088 "0d85a263-bfdc-426e-8415-64be65267cdb" 00:05:03.088 ], 00:05:03.088 "product_name": "Malloc disk", 00:05:03.088 "block_size": 512, 00:05:03.088 "num_blocks": 16384, 00:05:03.088 "uuid": "0d85a263-bfdc-426e-8415-64be65267cdb", 00:05:03.088 "assigned_rate_limits": { 00:05:03.088 "rw_ios_per_sec": 0, 00:05:03.088 "rw_mbytes_per_sec": 0, 00:05:03.088 "r_mbytes_per_sec": 0, 00:05:03.088 "w_mbytes_per_sec": 0 00:05:03.088 }, 
00:05:03.088 "claimed": false, 00:05:03.088 "zoned": false, 00:05:03.088 "supported_io_types": { 00:05:03.088 "read": true, 00:05:03.088 "write": true, 00:05:03.088 "unmap": true, 00:05:03.088 "flush": true, 00:05:03.088 "reset": true, 00:05:03.088 "nvme_admin": false, 00:05:03.088 "nvme_io": false, 00:05:03.088 "nvme_io_md": false, 00:05:03.088 "write_zeroes": true, 00:05:03.088 "zcopy": true, 00:05:03.088 "get_zone_info": false, 00:05:03.088 "zone_management": false, 00:05:03.088 "zone_append": false, 00:05:03.088 "compare": false, 00:05:03.088 "compare_and_write": false, 00:05:03.088 "abort": true, 00:05:03.088 "seek_hole": false, 00:05:03.088 "seek_data": false, 00:05:03.088 "copy": true, 00:05:03.088 "nvme_iov_md": false 00:05:03.088 }, 00:05:03.088 "memory_domains": [ 00:05:03.088 { 00:05:03.088 "dma_device_id": "system", 00:05:03.088 "dma_device_type": 1 00:05:03.088 }, 00:05:03.088 { 00:05:03.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.088 "dma_device_type": 2 00:05:03.088 } 00:05:03.088 ], 00:05:03.088 "driver_specific": {} 00:05:03.088 } 00:05:03.088 ]' 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.088 [2024-09-27 15:23:43.369791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.088 [2024-09-27 15:23:43.369836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.088 [2024-09-27 15:23:43.369852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x563950 00:05:03.088 [2024-09-27 15:23:43.369860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.088 [2024-09-27 15:23:43.371435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.088 [2024-09-27 15:23:43.371471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.088 Passthru0 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.088 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.088 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.088 { 00:05:03.088 "name": "Malloc0", 00:05:03.088 "aliases": [ 00:05:03.088 "0d85a263-bfdc-426e-8415-64be65267cdb" 00:05:03.088 ], 00:05:03.088 "product_name": "Malloc disk", 00:05:03.088 "block_size": 512, 00:05:03.088 "num_blocks": 16384, 00:05:03.088 "uuid": "0d85a263-bfdc-426e-8415-64be65267cdb", 00:05:03.088 "assigned_rate_limits": { 00:05:03.088 "rw_ios_per_sec": 0, 00:05:03.088 "rw_mbytes_per_sec": 0, 00:05:03.088 "r_mbytes_per_sec": 0, 00:05:03.088 "w_mbytes_per_sec": 0 00:05:03.088 }, 00:05:03.088 "claimed": true, 00:05:03.088 "claim_type": "exclusive_write", 00:05:03.088 "zoned": false, 00:05:03.088 "supported_io_types": { 00:05:03.088 "read": true, 00:05:03.088 "write": true, 00:05:03.088 "unmap": true, 00:05:03.088 "flush": 
true, 00:05:03.088 "reset": true, 00:05:03.088 "nvme_admin": false, 00:05:03.088 "nvme_io": false, 00:05:03.088 "nvme_io_md": false, 00:05:03.088 "write_zeroes": true, 00:05:03.088 "zcopy": true, 00:05:03.088 "get_zone_info": false, 00:05:03.088 "zone_management": false, 00:05:03.088 "zone_append": false, 00:05:03.088 "compare": false, 00:05:03.088 "compare_and_write": false, 00:05:03.088 "abort": true, 00:05:03.088 "seek_hole": false, 00:05:03.088 "seek_data": false, 00:05:03.088 "copy": true, 00:05:03.088 "nvme_iov_md": false 00:05:03.088 }, 00:05:03.088 "memory_domains": [ 00:05:03.088 { 00:05:03.088 "dma_device_id": "system", 00:05:03.088 "dma_device_type": 1 00:05:03.088 }, 00:05:03.088 { 00:05:03.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.088 "dma_device_type": 2 00:05:03.088 } 00:05:03.088 ], 00:05:03.088 "driver_specific": {} 00:05:03.088 }, 00:05:03.088 { 00:05:03.088 "name": "Passthru0", 00:05:03.088 "aliases": [ 00:05:03.088 "fba0c981-a617-52eb-84fd-3566c6c36256" 00:05:03.088 ], 00:05:03.088 "product_name": "passthru", 00:05:03.088 "block_size": 512, 00:05:03.088 "num_blocks": 16384, 00:05:03.088 "uuid": "fba0c981-a617-52eb-84fd-3566c6c36256", 00:05:03.088 "assigned_rate_limits": { 00:05:03.088 "rw_ios_per_sec": 0, 00:05:03.088 "rw_mbytes_per_sec": 0, 00:05:03.088 "r_mbytes_per_sec": 0, 00:05:03.088 "w_mbytes_per_sec": 0 00:05:03.088 }, 00:05:03.088 "claimed": false, 00:05:03.088 "zoned": false, 00:05:03.088 "supported_io_types": { 00:05:03.088 "read": true, 00:05:03.088 "write": true, 00:05:03.088 "unmap": true, 00:05:03.088 "flush": true, 00:05:03.088 "reset": true, 00:05:03.088 "nvme_admin": false, 00:05:03.088 "nvme_io": false, 00:05:03.088 "nvme_io_md": false, 00:05:03.088 "write_zeroes": true, 00:05:03.088 "zcopy": true, 00:05:03.088 "get_zone_info": false, 00:05:03.088 "zone_management": false, 00:05:03.088 "zone_append": false, 00:05:03.088 "compare": false, 00:05:03.088 "compare_and_write": false, 00:05:03.089 "abort": true, 00:05:03.089 "seek_hole": false, 00:05:03.089 "seek_data": false, 00:05:03.089 "copy": true, 00:05:03.089 "nvme_iov_md": false 00:05:03.089 }, 00:05:03.089 "memory_domains": [ 00:05:03.089 { 00:05:03.089 "dma_device_id": "system", 00:05:03.089 "dma_device_type": 1 00:05:03.089 }, 00:05:03.089 { 00:05:03.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.089 "dma_device_type": 2 00:05:03.089 } 00:05:03.089 ], 00:05:03.089 "driver_specific": { 00:05:03.089 "passthru": { 00:05:03.089 "name": "Passthru0", 00:05:03.089 "base_bdev_name": "Malloc0" 00:05:03.089 } 00:05:03.089 } 00:05:03.089 } 00:05:03.089 ]' 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.089 15:23:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.089 00:05:03.089 real 0m0.304s 00:05:03.089 user 0m0.186s 00:05:03.089 sys 0m0.044s 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.089 15:23:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.089 ************************************ 00:05:03.089 END TEST rpc_integrity 00:05:03.089 ************************************ 00:05:03.089 15:23:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.089 15:23:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.351 15:23:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.351 15:23:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 ************************************ 00:05:03.351 START TEST rpc_plugins 00:05:03.351 ************************************ 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.351 { 00:05:03.351 "name": "Malloc1", 00:05:03.351 "aliases": [ 00:05:03.351 "992a4497-a848-4614-90d9-aabe65774554" 00:05:03.351 ], 00:05:03.351 "product_name": "Malloc disk", 00:05:03.351 "block_size": 4096, 00:05:03.351 "num_blocks": 256, 00:05:03.351 "uuid": "992a4497-a848-4614-90d9-aabe65774554", 00:05:03.351 "assigned_rate_limits": { 00:05:03.351 "rw_ios_per_sec": 0, 00:05:03.351 "rw_mbytes_per_sec": 0, 00:05:03.351 "r_mbytes_per_sec": 0, 00:05:03.351 "w_mbytes_per_sec": 0 00:05:03.351 }, 00:05:03.351 "claimed": false, 00:05:03.351 "zoned": false, 00:05:03.351 "supported_io_types": { 00:05:03.351 "read": true, 00:05:03.351 "write": true, 00:05:03.351 "unmap": true, 00:05:03.351 "flush": true, 00:05:03.351 "reset": true, 00:05:03.351 "nvme_admin": false, 00:05:03.351 "nvme_io": false, 00:05:03.351 "nvme_io_md": false, 00:05:03.351 "write_zeroes": true, 00:05:03.351 "zcopy": true, 00:05:03.351 "get_zone_info": false, 00:05:03.351 "zone_management": false, 00:05:03.351 "zone_append": false, 00:05:03.351 "compare": false, 00:05:03.351 "compare_and_write": false, 00:05:03.351 "abort": true, 00:05:03.351 "seek_hole": false, 00:05:03.351 "seek_data": false, 00:05:03.351 "copy": true, 00:05:03.351 "nvme_iov_md": false 
00:05:03.351 }, 00:05:03.351 "memory_domains": [ 00:05:03.351 { 00:05:03.351 "dma_device_id": "system", 00:05:03.351 "dma_device_type": 1 00:05:03.351 }, 00:05:03.351 { 00:05:03.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.351 "dma_device_type": 2 00:05:03.351 } 00:05:03.351 ], 00:05:03.351 "driver_specific": {} 00:05:03.351 } 00:05:03.351 ]' 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.351 15:23:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.351 00:05:03.351 real 0m0.153s 00:05:03.351 user 0m0.096s 00:05:03.351 sys 0m0.021s 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.351 15:23:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.351 ************************************ 00:05:03.351 END TEST rpc_plugins 00:05:03.351 ************************************ 00:05:03.351 15:23:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.351 15:23:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.351 15:23:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.351 15:23:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.613 ************************************ 00:05:03.613 START TEST rpc_trace_cmd_test 00:05:03.613 ************************************ 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.613 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110250", 00:05:03.613 "tpoint_group_mask": "0x8", 00:05:03.613 "iscsi_conn": { 00:05:03.613 "mask": "0x2", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "scsi": { 00:05:03.613 "mask": "0x4", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "bdev": { 00:05:03.613 "mask": "0x8", 00:05:03.613 "tpoint_mask": "0xffffffffffffffff" 00:05:03.613 }, 00:05:03.613 "nvmf_rdma": { 00:05:03.613 "mask": "0x10", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "nvmf_tcp": { 00:05:03.613 "mask": "0x20", 00:05:03.613 
"tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "ftl": { 00:05:03.613 "mask": "0x40", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "blobfs": { 00:05:03.613 "mask": "0x80", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "dsa": { 00:05:03.613 "mask": "0x200", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "thread": { 00:05:03.613 "mask": "0x400", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "nvme_pcie": { 00:05:03.613 "mask": "0x800", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "iaa": { 00:05:03.613 "mask": "0x1000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "nvme_tcp": { 00:05:03.613 "mask": "0x2000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "bdev_nvme": { 00:05:03.613 "mask": "0x4000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "sock": { 00:05:03.613 "mask": "0x8000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "blob": { 00:05:03.613 "mask": "0x10000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 }, 00:05:03.613 "bdev_raid": { 00:05:03.613 "mask": "0x20000", 00:05:03.613 "tpoint_mask": "0x0" 00:05:03.613 } 00:05:03.613 }' 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.613 15:23:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.613 15:23:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.613 15:23:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.613 15:23:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.613 15:23:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.876 15:23:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.876 00:05:03.876 real 0m0.256s 00:05:03.876 user 0m0.215s 00:05:03.876 sys 0m0.031s 00:05:03.876 15:23:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 ************************************ 00:05:03.876 END TEST rpc_trace_cmd_test 00:05:03.876 ************************************ 00:05:03.876 15:23:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.876 15:23:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.876 15:23:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.876 15:23:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.876 15:23:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.876 15:23:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 ************************************ 00:05:03.876 START TEST rpc_daemon_integrity 00:05:03.876 ************************************ 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.876 { 00:05:03.876 "name": "Malloc2", 00:05:03.876 "aliases": [ 00:05:03.876 "b937239c-e05a-45b2-939c-e68dc51cfca0" 00:05:03.876 ], 00:05:03.876 "product_name": "Malloc disk", 00:05:03.876 "block_size": 512, 00:05:03.876 "num_blocks": 16384, 00:05:03.876 "uuid": "b937239c-e05a-45b2-939c-e68dc51cfca0", 00:05:03.876 "assigned_rate_limits": { 00:05:03.876 "rw_ios_per_sec": 0, 00:05:03.876 "rw_mbytes_per_sec": 0, 00:05:03.876 "r_mbytes_per_sec": 0, 00:05:03.876 "w_mbytes_per_sec": 0 00:05:03.876 }, 00:05:03.876 "claimed": false, 00:05:03.876 "zoned": false, 00:05:03.876 "supported_io_types": { 00:05:03.876 "read": true, 00:05:03.876 "write": true, 00:05:03.876 "unmap": true, 00:05:03.876 "flush": true, 00:05:03.876 "reset": true, 00:05:03.876 "nvme_admin": false, 00:05:03.876 "nvme_io": false, 00:05:03.876 "nvme_io_md": false, 00:05:03.876 "write_zeroes": true, 00:05:03.876 "zcopy": true, 00:05:03.876 "get_zone_info": false, 00:05:03.876 "zone_management": false, 00:05:03.876 "zone_append": false, 00:05:03.876 "compare": false, 00:05:03.876 "compare_and_write": false, 00:05:03.876 "abort": true, 00:05:03.876 "seek_hole": false, 00:05:03.876 "seek_data": false, 00:05:03.876 "copy": true, 00:05:03.876 "nvme_iov_md": false 00:05:03.876 }, 00:05:03.876 "memory_domains": [ 00:05:03.876 { 00:05:03.876 "dma_device_id": "system", 00:05:03.876 "dma_device_type": 1 00:05:03.876 }, 00:05:03.876 { 00:05:03.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.876 "dma_device_type": 2 00:05:03.876 } 00:05:03.876 ], 00:05:03.876 "driver_specific": {} 00:05:03.876 } 00:05:03.876 ]' 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.876 [2024-09-27 15:23:44.332418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.876 [2024-09-27 15:23:44.332459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.876 
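The integrity passes drive the target through rpc_cmd, but the same JSON-RPC methods can be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock. A sketch of the sequence as traced above (method names and arguments exactly as logged; the bdev name returned by a fresh run may differ):

    ./scripts/rpc.py bdev_malloc_create 8 512                 # returns a name such as Malloc2
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length               # 2: the malloc disk plus the passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length               # back to 0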
[2024-09-27 15:23:44.332473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f4300 00:05:03.876 [2024-09-27 15:23:44.332481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.876 [2024-09-27 15:23:44.333913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.876 [2024-09-27 15:23:44.333950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.876 Passthru0 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.876 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.138 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.139 { 00:05:04.139 "name": "Malloc2", 00:05:04.139 "aliases": [ 00:05:04.139 "b937239c-e05a-45b2-939c-e68dc51cfca0" 00:05:04.139 ], 00:05:04.139 "product_name": "Malloc disk", 00:05:04.139 "block_size": 512, 00:05:04.139 "num_blocks": 16384, 00:05:04.139 "uuid": "b937239c-e05a-45b2-939c-e68dc51cfca0", 00:05:04.139 "assigned_rate_limits": { 00:05:04.139 "rw_ios_per_sec": 0, 00:05:04.139 "rw_mbytes_per_sec": 0, 00:05:04.139 "r_mbytes_per_sec": 0, 00:05:04.139 "w_mbytes_per_sec": 0 00:05:04.139 }, 00:05:04.139 "claimed": true, 00:05:04.139 "claim_type": "exclusive_write", 00:05:04.139 "zoned": false, 00:05:04.139 "supported_io_types": { 00:05:04.139 "read": true, 00:05:04.139 "write": true, 00:05:04.139 "unmap": true, 00:05:04.139 "flush": true, 00:05:04.139 "reset": true, 00:05:04.139 "nvme_admin": false, 00:05:04.139 "nvme_io": false, 00:05:04.139 "nvme_io_md": false, 00:05:04.139 "write_zeroes": true, 00:05:04.139 "zcopy": true, 00:05:04.139 "get_zone_info": false, 00:05:04.139 "zone_management": false, 00:05:04.139 "zone_append": false, 00:05:04.139 "compare": false, 00:05:04.139 "compare_and_write": false, 00:05:04.139 "abort": true, 00:05:04.139 "seek_hole": false, 00:05:04.139 "seek_data": false, 00:05:04.139 "copy": true, 00:05:04.139 "nvme_iov_md": false 00:05:04.139 }, 00:05:04.139 "memory_domains": [ 00:05:04.139 { 00:05:04.139 "dma_device_id": "system", 00:05:04.139 "dma_device_type": 1 00:05:04.139 }, 00:05:04.139 { 00:05:04.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.139 "dma_device_type": 2 00:05:04.139 } 00:05:04.139 ], 00:05:04.139 "driver_specific": {} 00:05:04.139 }, 00:05:04.139 { 00:05:04.139 "name": "Passthru0", 00:05:04.139 "aliases": [ 00:05:04.139 "ca25473b-28e3-5157-8dd8-307fd0fb5a94" 00:05:04.139 ], 00:05:04.139 "product_name": "passthru", 00:05:04.139 "block_size": 512, 00:05:04.139 "num_blocks": 16384, 00:05:04.139 "uuid": "ca25473b-28e3-5157-8dd8-307fd0fb5a94", 00:05:04.139 "assigned_rate_limits": { 00:05:04.139 "rw_ios_per_sec": 0, 00:05:04.139 "rw_mbytes_per_sec": 0, 00:05:04.139 "r_mbytes_per_sec": 0, 00:05:04.139 "w_mbytes_per_sec": 0 00:05:04.139 }, 00:05:04.139 "claimed": false, 00:05:04.139 "zoned": false, 00:05:04.139 "supported_io_types": { 00:05:04.139 "read": true, 00:05:04.139 "write": true, 00:05:04.139 "unmap": true, 00:05:04.139 "flush": true, 00:05:04.139 "reset": true, 00:05:04.139 "nvme_admin": false, 00:05:04.139 "nvme_io": false, 00:05:04.139 "nvme_io_md": false, 00:05:04.139 
"write_zeroes": true, 00:05:04.139 "zcopy": true, 00:05:04.139 "get_zone_info": false, 00:05:04.139 "zone_management": false, 00:05:04.139 "zone_append": false, 00:05:04.139 "compare": false, 00:05:04.139 "compare_and_write": false, 00:05:04.139 "abort": true, 00:05:04.139 "seek_hole": false, 00:05:04.139 "seek_data": false, 00:05:04.139 "copy": true, 00:05:04.139 "nvme_iov_md": false 00:05:04.139 }, 00:05:04.139 "memory_domains": [ 00:05:04.139 { 00:05:04.139 "dma_device_id": "system", 00:05:04.139 "dma_device_type": 1 00:05:04.139 }, 00:05:04.139 { 00:05:04.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.139 "dma_device_type": 2 00:05:04.139 } 00:05:04.139 ], 00:05:04.139 "driver_specific": { 00:05:04.139 "passthru": { 00:05:04.139 "name": "Passthru0", 00:05:04.139 "base_bdev_name": "Malloc2" 00:05:04.139 } 00:05:04.139 } 00:05:04.139 } 00:05:04.139 ]' 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.139 00:05:04.139 real 0m0.304s 00:05:04.139 user 0m0.191s 00:05:04.139 sys 0m0.046s 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.139 15:23:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 ************************************ 00:05:04.139 END TEST rpc_daemon_integrity 00:05:04.139 ************************************ 00:05:04.139 15:23:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.139 15:23:44 rpc -- rpc/rpc.sh@84 -- # killprocess 110250 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@950 -- # '[' -z 110250 ']' 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@954 -- # kill -0 110250 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@955 -- # uname 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110250 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.139 15:23:44 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110250' 00:05:04.139 killing process with pid 110250 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@969 -- # kill 110250 00:05:04.139 15:23:44 rpc -- common/autotest_common.sh@974 -- # wait 110250 00:05:04.401 00:05:04.401 real 0m2.716s 00:05:04.401 user 0m3.479s 00:05:04.401 sys 0m0.804s 00:05:04.401 15:23:44 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.401 15:23:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.401 ************************************ 00:05:04.401 END TEST rpc 00:05:04.401 ************************************ 00:05:04.401 15:23:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.401 15:23:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.401 15:23:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.401 15:23:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.663 ************************************ 00:05:04.663 START TEST skip_rpc 00:05:04.663 ************************************ 00:05:04.663 15:23:44 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.663 * Looking for test storage... 00:05:04.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.663 15:23:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.663 --rc genhtml_branch_coverage=1 00:05:04.663 --rc genhtml_function_coverage=1 00:05:04.663 --rc genhtml_legend=1 00:05:04.663 --rc geninfo_all_blocks=1 00:05:04.663 --rc geninfo_unexecuted_blocks=1 00:05:04.663 00:05:04.663 ' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.663 --rc genhtml_branch_coverage=1 00:05:04.663 --rc genhtml_function_coverage=1 00:05:04.663 --rc genhtml_legend=1 00:05:04.663 --rc geninfo_all_blocks=1 00:05:04.663 --rc geninfo_unexecuted_blocks=1 00:05:04.663 00:05:04.663 ' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.663 --rc genhtml_branch_coverage=1 00:05:04.663 --rc genhtml_function_coverage=1 00:05:04.663 --rc genhtml_legend=1 00:05:04.663 --rc geninfo_all_blocks=1 00:05:04.663 --rc geninfo_unexecuted_blocks=1 00:05:04.663 00:05:04.663 ' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.663 --rc genhtml_branch_coverage=1 00:05:04.663 --rc genhtml_function_coverage=1 00:05:04.663 --rc genhtml_legend=1 00:05:04.663 --rc geninfo_all_blocks=1 00:05:04.663 --rc geninfo_unexecuted_blocks=1 00:05:04.663 00:05:04.663 ' 00:05:04.663 15:23:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.663 15:23:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.663 15:23:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.663 15:23:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.925 ************************************ 00:05:04.925 START TEST skip_rpc 00:05:04.925 ************************************ 00:05:04.925 15:23:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:04.925 
15:23:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111091 00:05:04.925 15:23:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.925 15:23:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.925 15:23:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.925 [2024-09-27 15:23:45.212920] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:04.925 [2024-09-27 15:23:45.212984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111091 ] 00:05:04.925 [2024-09-27 15:23:45.296490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.925 [2024-09-27 15:23:45.342707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111091 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 111091 ']' 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 111091 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111091 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111091' 00:05:10.223 killing process with pid 111091 00:05:10.223 15:23:50 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 111091 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 111091 00:05:10.223 00:05:10.223 real 0m5.276s 00:05:10.223 user 0m5.028s 00:05:10.223 sys 0m0.293s 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.223 15:23:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.223 ************************************ 00:05:10.223 END TEST skip_rpc 00:05:10.223 ************************************ 00:05:10.223 15:23:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.223 15:23:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.223 15:23:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.223 15:23:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.223 ************************************ 00:05:10.223 START TEST skip_rpc_with_json 00:05:10.223 ************************************ 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112137 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112137 00:05:10.223 15:23:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 112137 ']' 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.224 15:23:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.224 [2024-09-27 15:23:50.567151] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:10.224 [2024-09-27 15:23:50.567209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112137 ] 00:05:10.224 [2024-09-27 15:23:50.648047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.224 [2024-09-27 15:23:50.681687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.164 [2024-09-27 15:23:51.356297] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.164 request: 00:05:11.164 { 00:05:11.164 "trtype": "tcp", 00:05:11.164 "method": "nvmf_get_transports", 00:05:11.164 "req_id": 1 00:05:11.164 } 00:05:11.164 Got JSON-RPC error response 00:05:11.164 response: 00:05:11.164 { 00:05:11.164 "code": -19, 00:05:11.164 "message": "No such device" 00:05:11.164 } 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.164 [2024-09-27 15:23:51.368393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.164 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.164 { 00:05:11.164 "subsystems": [ 00:05:11.164 { 00:05:11.164 "subsystem": "fsdev", 00:05:11.164 "config": [ 00:05:11.164 { 00:05:11.164 "method": "fsdev_set_opts", 00:05:11.164 "params": { 00:05:11.164 "fsdev_io_pool_size": 65535, 00:05:11.164 "fsdev_io_cache_size": 256 00:05:11.164 } 00:05:11.164 } 00:05:11.164 ] 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "vfio_user_target", 00:05:11.164 "config": null 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "keyring", 00:05:11.164 "config": [] 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "iobuf", 00:05:11.164 "config": [ 00:05:11.164 { 00:05:11.164 "method": "iobuf_set_options", 00:05:11.164 "params": { 00:05:11.164 "small_pool_count": 8192, 00:05:11.164 "large_pool_count": 1024, 00:05:11.164 "small_bufsize": 8192, 00:05:11.164 "large_bufsize": 135168 00:05:11.164 } 00:05:11.164 } 00:05:11.164 ] 00:05:11.164 }, 00:05:11.164 { 
00:05:11.164 "subsystem": "sock", 00:05:11.164 "config": [ 00:05:11.164 { 00:05:11.164 "method": "sock_set_default_impl", 00:05:11.164 "params": { 00:05:11.164 "impl_name": "posix" 00:05:11.164 } 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "method": "sock_impl_set_options", 00:05:11.164 "params": { 00:05:11.164 "impl_name": "ssl", 00:05:11.164 "recv_buf_size": 4096, 00:05:11.164 "send_buf_size": 4096, 00:05:11.164 "enable_recv_pipe": true, 00:05:11.164 "enable_quickack": false, 00:05:11.164 "enable_placement_id": 0, 00:05:11.164 "enable_zerocopy_send_server": true, 00:05:11.164 "enable_zerocopy_send_client": false, 00:05:11.164 "zerocopy_threshold": 0, 00:05:11.164 "tls_version": 0, 00:05:11.164 "enable_ktls": false 00:05:11.164 } 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "method": "sock_impl_set_options", 00:05:11.164 "params": { 00:05:11.164 "impl_name": "posix", 00:05:11.164 "recv_buf_size": 2097152, 00:05:11.164 "send_buf_size": 2097152, 00:05:11.164 "enable_recv_pipe": true, 00:05:11.164 "enable_quickack": false, 00:05:11.164 "enable_placement_id": 0, 00:05:11.164 "enable_zerocopy_send_server": true, 00:05:11.164 "enable_zerocopy_send_client": false, 00:05:11.164 "zerocopy_threshold": 0, 00:05:11.164 "tls_version": 0, 00:05:11.164 "enable_ktls": false 00:05:11.164 } 00:05:11.164 } 00:05:11.164 ] 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "vmd", 00:05:11.164 "config": [] 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "accel", 00:05:11.164 "config": [ 00:05:11.164 { 00:05:11.164 "method": "accel_set_options", 00:05:11.164 "params": { 00:05:11.164 "small_cache_size": 128, 00:05:11.164 "large_cache_size": 16, 00:05:11.164 "task_count": 2048, 00:05:11.164 "sequence_count": 2048, 00:05:11.164 "buf_count": 2048 00:05:11.164 } 00:05:11.164 } 00:05:11.164 ] 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "subsystem": "bdev", 00:05:11.164 "config": [ 00:05:11.164 { 00:05:11.164 "method": "bdev_set_options", 00:05:11.164 "params": { 00:05:11.164 "bdev_io_pool_size": 65535, 00:05:11.164 "bdev_io_cache_size": 256, 00:05:11.164 "bdev_auto_examine": true, 00:05:11.164 "iobuf_small_cache_size": 128, 00:05:11.164 "iobuf_large_cache_size": 16 00:05:11.164 } 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "method": "bdev_raid_set_options", 00:05:11.164 "params": { 00:05:11.164 "process_window_size_kb": 1024, 00:05:11.164 "process_max_bandwidth_mb_sec": 0 00:05:11.164 } 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "method": "bdev_iscsi_set_options", 00:05:11.164 "params": { 00:05:11.164 "timeout_sec": 30 00:05:11.164 } 00:05:11.164 }, 00:05:11.164 { 00:05:11.164 "method": "bdev_nvme_set_options", 00:05:11.164 "params": { 00:05:11.164 "action_on_timeout": "none", 00:05:11.164 "timeout_us": 0, 00:05:11.164 "timeout_admin_us": 0, 00:05:11.164 "keep_alive_timeout_ms": 10000, 00:05:11.164 "arbitration_burst": 0, 00:05:11.164 "low_priority_weight": 0, 00:05:11.164 "medium_priority_weight": 0, 00:05:11.164 "high_priority_weight": 0, 00:05:11.164 "nvme_adminq_poll_period_us": 10000, 00:05:11.164 "nvme_ioq_poll_period_us": 0, 00:05:11.164 "io_queue_requests": 0, 00:05:11.164 "delay_cmd_submit": true, 00:05:11.164 "transport_retry_count": 4, 00:05:11.164 "bdev_retry_count": 3, 00:05:11.164 "transport_ack_timeout": 0, 00:05:11.164 "ctrlr_loss_timeout_sec": 0, 00:05:11.164 "reconnect_delay_sec": 0, 00:05:11.165 "fast_io_fail_timeout_sec": 0, 00:05:11.165 "disable_auto_failback": false, 00:05:11.165 "generate_uuids": false, 00:05:11.165 "transport_tos": 0, 00:05:11.165 "nvme_error_stat": false, 
00:05:11.165 "rdma_srq_size": 0, 00:05:11.165 "io_path_stat": false, 00:05:11.165 "allow_accel_sequence": false, 00:05:11.165 "rdma_max_cq_size": 0, 00:05:11.165 "rdma_cm_event_timeout_ms": 0, 00:05:11.165 "dhchap_digests": [ 00:05:11.165 "sha256", 00:05:11.165 "sha384", 00:05:11.165 "sha512" 00:05:11.165 ], 00:05:11.165 "dhchap_dhgroups": [ 00:05:11.165 "null", 00:05:11.165 "ffdhe2048", 00:05:11.165 "ffdhe3072", 00:05:11.165 "ffdhe4096", 00:05:11.165 "ffdhe6144", 00:05:11.165 "ffdhe8192" 00:05:11.165 ] 00:05:11.165 } 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "method": "bdev_nvme_set_hotplug", 00:05:11.165 "params": { 00:05:11.165 "period_us": 100000, 00:05:11.165 "enable": false 00:05:11.165 } 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "method": "bdev_wait_for_examine" 00:05:11.165 } 00:05:11.165 ] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "scsi", 00:05:11.165 "config": null 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "scheduler", 00:05:11.165 "config": [ 00:05:11.165 { 00:05:11.165 "method": "framework_set_scheduler", 00:05:11.165 "params": { 00:05:11.165 "name": "static" 00:05:11.165 } 00:05:11.165 } 00:05:11.165 ] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "vhost_scsi", 00:05:11.165 "config": [] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "vhost_blk", 00:05:11.165 "config": [] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "ublk", 00:05:11.165 "config": [] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "nbd", 00:05:11.165 "config": [] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "nvmf", 00:05:11.165 "config": [ 00:05:11.165 { 00:05:11.165 "method": "nvmf_set_config", 00:05:11.165 "params": { 00:05:11.165 "discovery_filter": "match_any", 00:05:11.165 "admin_cmd_passthru": { 00:05:11.165 "identify_ctrlr": false 00:05:11.165 }, 00:05:11.165 "dhchap_digests": [ 00:05:11.165 "sha256", 00:05:11.165 "sha384", 00:05:11.165 "sha512" 00:05:11.165 ], 00:05:11.165 "dhchap_dhgroups": [ 00:05:11.165 "null", 00:05:11.165 "ffdhe2048", 00:05:11.165 "ffdhe3072", 00:05:11.165 "ffdhe4096", 00:05:11.165 "ffdhe6144", 00:05:11.165 "ffdhe8192" 00:05:11.165 ] 00:05:11.165 } 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "method": "nvmf_set_max_subsystems", 00:05:11.165 "params": { 00:05:11.165 "max_subsystems": 1024 00:05:11.165 } 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "method": "nvmf_set_crdt", 00:05:11.165 "params": { 00:05:11.165 "crdt1": 0, 00:05:11.165 "crdt2": 0, 00:05:11.165 "crdt3": 0 00:05:11.165 } 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "method": "nvmf_create_transport", 00:05:11.165 "params": { 00:05:11.165 "trtype": "TCP", 00:05:11.165 "max_queue_depth": 128, 00:05:11.165 "max_io_qpairs_per_ctrlr": 127, 00:05:11.165 "in_capsule_data_size": 4096, 00:05:11.165 "max_io_size": 131072, 00:05:11.165 "io_unit_size": 131072, 00:05:11.165 "max_aq_depth": 128, 00:05:11.165 "num_shared_buffers": 511, 00:05:11.165 "buf_cache_size": 4294967295, 00:05:11.165 "dif_insert_or_strip": false, 00:05:11.165 "zcopy": false, 00:05:11.165 "c2h_success": true, 00:05:11.165 "sock_priority": 0, 00:05:11.165 "abort_timeout_sec": 1, 00:05:11.165 "ack_timeout": 0, 00:05:11.165 "data_wr_pool_size": 0 00:05:11.165 } 00:05:11.165 } 00:05:11.165 ] 00:05:11.165 }, 00:05:11.165 { 00:05:11.165 "subsystem": "iscsi", 00:05:11.165 "config": [ 00:05:11.165 { 00:05:11.165 "method": "iscsi_set_options", 00:05:11.165 "params": { 00:05:11.165 "node_base": "iqn.2016-06.io.spdk", 00:05:11.165 "max_sessions": 128, 00:05:11.165 
"max_connections_per_session": 2, 00:05:11.165 "max_queue_depth": 64, 00:05:11.165 "default_time2wait": 2, 00:05:11.165 "default_time2retain": 20, 00:05:11.165 "first_burst_length": 8192, 00:05:11.165 "immediate_data": true, 00:05:11.165 "allow_duplicated_isid": false, 00:05:11.165 "error_recovery_level": 0, 00:05:11.165 "nop_timeout": 60, 00:05:11.165 "nop_in_interval": 30, 00:05:11.165 "disable_chap": false, 00:05:11.165 "require_chap": false, 00:05:11.165 "mutual_chap": false, 00:05:11.165 "chap_group": 0, 00:05:11.165 "max_large_datain_per_connection": 64, 00:05:11.165 "max_r2t_per_connection": 4, 00:05:11.165 "pdu_pool_size": 36864, 00:05:11.165 "immediate_data_pool_size": 16384, 00:05:11.165 "data_out_pool_size": 2048 00:05:11.165 } 00:05:11.165 } 00:05:11.165 ] 00:05:11.165 } 00:05:11.165 ] 00:05:11.165 } 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112137 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112137 ']' 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112137 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112137 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112137' 00:05:11.165 killing process with pid 112137 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112137 00:05:11.165 15:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112137 00:05:11.425 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112476 00:05:11.425 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.425 15:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112476 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112476 ']' 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112476 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112476 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 112476' 00:05:16.712 killing process with pid 112476 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112476 00:05:16.712 15:23:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112476 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.712 00:05:16.712 real 0m6.567s 00:05:16.712 user 0m6.464s 00:05:16.712 sys 0m0.575s 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.712 ************************************ 00:05:16.712 END TEST skip_rpc_with_json 00:05:16.712 ************************************ 00:05:16.712 15:23:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.712 15:23:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.712 15:23:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.712 15:23:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.712 ************************************ 00:05:16.712 START TEST skip_rpc_with_delay 00:05:16.712 ************************************ 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:16.712 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.974 [2024-09-27 15:23:57.214256] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:16.974 [2024-09-27 15:23:57.214353] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.974 00:05:16.974 real 0m0.078s 00:05:16.974 user 0m0.052s 00:05:16.974 sys 0m0.025s 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.974 15:23:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:16.974 ************************************ 00:05:16.974 END TEST skip_rpc_with_delay 00:05:16.974 ************************************ 00:05:16.974 15:23:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:16.974 15:23:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:16.974 15:23:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:16.974 15:23:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.974 15:23:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.974 15:23:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.974 ************************************ 00:05:16.974 START TEST exit_on_failed_rpc_init 00:05:16.974 ************************************ 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=113537 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 113537 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 113537 ']' 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.974 15:23:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.974 [2024-09-27 15:23:57.380639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
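
Note: the skip_rpc_with_delay test that finishes above rests on a single invariant, visible in the two *ERROR* lines from app.c: spdk_tgt must refuse --wait-for-rpc whenever --no-rpc-server disables the RPC server. A minimal standalone sketch of that check, assuming the same workspace layout as this job (this script is not part of the suite):

  #!/usr/bin/env bash
  # Hedged sketch: combining --no-rpc-server with --wait-for-rpc
  # must make spdk_tgt exit non-zero, as test_skip_rpc_with_delay asserts.
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: invalid flag combination was accepted" >&2
      exit 1
  fi
  echo "OK: spdk_tgt rejected --wait-for-rpc without an RPC server"
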
00:05:16.974 [2024-09-27 15:23:57.380700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113537 ] 00:05:17.235 [2024-09-27 15:23:57.463275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.235 [2024-09-27 15:23:57.497071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.807 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.807 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:17.807 15:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:17.808 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.808 [2024-09-27 15:23:58.208551] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:17.808 [2024-09-27 15:23:58.208603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113744 ] 00:05:17.808 [2024-09-27 15:23:58.284965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.069 [2024-09-27 15:23:58.315830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.069 [2024-09-27 15:23:58.315899] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
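
Note: the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above, together with the spdk_app_stop lines that continue below, is exactly what test_exit_on_failed_rpc_init provokes: a second spdk_tgt bound to the same default RPC socket. Reproduced by hand it looks roughly like the sketch below; the sleep is a crude stand-in for the suite's waitforlisten helper.

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &            # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                         # crude wait for the RPC listener to come up
  if "$SPDK_TGT" -m 0x2; then     # must fail: the socket is already taken
      echo "FAIL: second target started despite the busy RPC socket" >&2
  fi
  kill "$first_pid"
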
00:05:18.069 [2024-09-27 15:23:58.315912] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.069 [2024-09-27 15:23:58.315921] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 113537 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 113537 ']' 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 113537 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113537 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113537' 00:05:18.069 killing process with pid 113537 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 113537 00:05:18.069 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 113537 00:05:18.331 00:05:18.331 real 0m1.304s 00:05:18.331 user 0m1.495s 00:05:18.331 sys 0m0.395s 00:05:18.331 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.331 15:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.331 ************************************ 00:05:18.331 END TEST exit_on_failed_rpc_init 00:05:18.331 ************************************ 00:05:18.331 15:23:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.331 00:05:18.331 real 0m13.746s 00:05:18.331 user 0m13.272s 00:05:18.331 sys 0m1.605s 00:05:18.331 15:23:58 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.331 15:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.331 ************************************ 00:05:18.331 END TEST skip_rpc 00:05:18.331 ************************************ 00:05:18.331 15:23:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.331 15:23:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.331 15:23:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.331 15:23:58 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.331 ************************************ 00:05:18.331 START TEST rpc_client 00:05:18.331 ************************************ 00:05:18.331 15:23:58 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.593 * Looking for test storage... 00:05:18.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.593 15:23:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.593 --rc genhtml_branch_coverage=1 00:05:18.593 --rc genhtml_function_coverage=1 00:05:18.593 --rc genhtml_legend=1 00:05:18.593 --rc geninfo_all_blocks=1 00:05:18.593 --rc geninfo_unexecuted_blocks=1 00:05:18.593 00:05:18.593 ' 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.593 --rc genhtml_branch_coverage=1 00:05:18.593 --rc genhtml_function_coverage=1 00:05:18.593 --rc genhtml_legend=1 00:05:18.593 --rc geninfo_all_blocks=1 00:05:18.593 --rc geninfo_unexecuted_blocks=1 00:05:18.593 00:05:18.593 ' 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.593 --rc genhtml_branch_coverage=1 00:05:18.593 --rc genhtml_function_coverage=1 00:05:18.593 --rc genhtml_legend=1 00:05:18.593 --rc geninfo_all_blocks=1 00:05:18.593 --rc geninfo_unexecuted_blocks=1 00:05:18.593 00:05:18.593 ' 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.593 --rc genhtml_branch_coverage=1 00:05:18.593 --rc genhtml_function_coverage=1 00:05:18.593 --rc genhtml_legend=1 00:05:18.593 --rc geninfo_all_blocks=1 00:05:18.593 --rc geninfo_unexecuted_blocks=1 00:05:18.593 00:05:18.593 ' 00:05:18.593 15:23:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:18.593 OK 00:05:18.593 15:23:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.593 00:05:18.593 real 0m0.227s 00:05:18.593 user 0m0.137s 00:05:18.593 sys 0m0.101s 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.593 15:23:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:18.593 ************************************ 00:05:18.593 END TEST rpc_client 00:05:18.593 ************************************ 00:05:18.593 15:23:59 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
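
Note: the lcov probe traced just above for rpc_client (and repeated verbatim for json_config below) reduces to a dotted-version less-than test, lt 1.15 2, which scripts/common.sh implements via cmp_versions, splitting on '.' and comparing field by field. The core idea as a self-contained sketch; version_lt is a hypothetical name, not the helper's real one:

  version_lt() {
      local IFS=. i x y
      local -a a=($1) b=($2)
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}     # missing fields compare as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov older than 2: keep the --rc lcov_* option spelling"
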
00:05:18.593 15:23:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.593 15:23:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.593 15:23:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.593 ************************************ 00:05:18.593 START TEST json_config 00:05:18.593 ************************************ 00:05:18.593 15:23:59 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:18.855 15:23:59 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.855 15:23:59 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.855 15:23:59 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.855 15:23:59 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.855 15:23:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.855 15:23:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.855 15:23:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.855 15:23:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.855 15:23:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.855 15:23:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.856 15:23:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.856 15:23:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.856 15:23:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:18.856 15:23:59 json_config -- scripts/common.sh@345 -- # : 1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.856 15:23:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.856 15:23:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@353 -- # local d=1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.856 15:23:59 json_config -- scripts/common.sh@355 -- # echo 1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.856 15:23:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@353 -- # local d=2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.856 15:23:59 json_config -- scripts/common.sh@355 -- # echo 2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.856 15:23:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.856 15:23:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.856 15:23:59 json_config -- scripts/common.sh@368 -- # return 0 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 00:05:18.856 ' 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 00:05:18.856 ' 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 00:05:18.856 ' 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.856 --rc genhtml_branch_coverage=1 00:05:18.856 --rc genhtml_function_coverage=1 00:05:18.856 --rc genhtml_legend=1 00:05:18.856 --rc geninfo_all_blocks=1 00:05:18.856 --rc geninfo_unexecuted_blocks=1 00:05:18.856 00:05:18.856 ' 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:18.856 15:23:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.856 15:23:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.856 15:23:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.856 15:23:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.856 15:23:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.856 15:23:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.856 15:23:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.856 15:23:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.856 15:23:59 json_config -- paths/export.sh@5 -- # export PATH 00:05:18.856 15:23:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@51 -- # : 0 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
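
Note: the nvmf/common.sh sourcing above exports the initiator identity (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID) and the target defaults (NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS=127.0.0.1) that connect-side steps consume; build_nvmf_app_args, traced next, folds related flags into the app's argument list. As an illustration of how those exports combine, the nvme-cli call below is typical usage, not a command taken from this log:

  # Illustrative only: attaching an initiator to the listener this suite
  # later creates on 127.0.0.1:4420, using the variables exported above.
  $NVME_CONNECT -t tcp -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" \
      -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
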
00:05:18.856 15:23:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.856 15:23:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:18.856 INFO: JSON configuration test init 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.856 15:23:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.856 15:23:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:18.856 15:23:59 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:18.856 15:23:59 json_config -- json_config/common.sh@10 -- # shift 00:05:18.856 15:23:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.856 15:23:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.856 15:23:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.856 15:23:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.857 15:23:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.857 15:23:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=114015 00:05:18.857 15:23:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.857 Waiting for target to run... 00:05:18.857 15:23:59 json_config -- json_config/common.sh@25 -- # waitforlisten 114015 /var/tmp/spdk_tgt.sock 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@831 -- # '[' -z 114015 ']' 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.857 15:23:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.857 15:23:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.857 [2024-09-27 15:23:59.309569] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
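
Note: waitforlisten, whose trace surrounds this launch, conceptually polls the new target's RPC socket until it answers; the trace shows max_retries=100. Because the target runs with --wait-for-rpc, an always-available RPC such as rpc_get_methods is the natural probe. A minimal sketch of that loop (the real helper in autotest_common.sh also verifies the pid is still alive between attempts):

  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$RPC_PY" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
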
00:05:18.857 [2024-09-27 15:23:59.309623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114015 ] 00:05:19.431 [2024-09-27 15:23:59.628637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.431 [2024-09-27 15:23:59.656308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:19.692 15:24:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.692 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.692 15:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:19.692 15:24:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:19.692 15:24:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:20.264 15:24:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.264 15:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:20.264 15:24:00 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:20.264 15:24:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:20.525 15:24:00 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@54 -- # sort 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:20.525 15:24:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.525 15:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:20.525 15:24:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.525 15:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:20.525 15:24:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.525 15:24:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.786 MallocForNvmf0 00:05:20.786 15:24:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.786 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.786 MallocForNvmf1 00:05:21.048 15:24:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.048 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.048 [2024-09-27 15:24:01.430488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.048 15:24:01 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.048 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.310 15:24:01 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.310 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.571 15:24:01 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.571 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.571 15:24:01 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.572 15:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.833 [2024-09-27 15:24:02.136600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.833 15:24:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:21.833 15:24:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.833 15:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.833 15:24:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:21.833 15:24:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.833 15:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.833 15:24:02 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:21.833 15:24:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.833 15:24:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.094 MallocBdevForConfigChangeCheck 00:05:22.094 15:24:02 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:22.094 15:24:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.094 15:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.094 15:24:02 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:22.094 15:24:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.355 15:24:02 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:22.355 INFO: shutting down applications... 
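
Note: the configuration saved a few lines above (tgt_rpc save_config) was assembled through RPC calls that appear verbatim in this log; the core NVMe-oF sequence, replayed by hand against the same socket, is:

  # RPC is left unquoted below on purpose so the -s option splits correctly.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json
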
00:05:22.355 15:24:02 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:22.355 15:24:02 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:22.355 15:24:02 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:22.355 15:24:02 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:22.926 Calling clear_iscsi_subsystem 00:05:22.926 Calling clear_nvmf_subsystem 00:05:22.927 Calling clear_nbd_subsystem 00:05:22.927 Calling clear_ublk_subsystem 00:05:22.927 Calling clear_vhost_blk_subsystem 00:05:22.927 Calling clear_vhost_scsi_subsystem 00:05:22.927 Calling clear_bdev_subsystem 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.927 15:24:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.187 15:24:03 json_config -- json_config/json_config.sh@352 -- # break 00:05:23.187 15:24:03 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:23.187 15:24:03 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:23.187 15:24:03 json_config -- json_config/common.sh@31 -- # local app=target 00:05:23.187 15:24:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.187 15:24:03 json_config -- json_config/common.sh@35 -- # [[ -n 114015 ]] 00:05:23.187 15:24:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 114015 00:05:23.187 15:24:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.187 15:24:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.187 15:24:03 json_config -- json_config/common.sh@41 -- # kill -0 114015 00:05:23.187 15:24:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.758 15:24:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.759 15:24:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.759 15:24:04 json_config -- json_config/common.sh@41 -- # kill -0 114015 00:05:23.759 15:24:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.759 15:24:04 json_config -- json_config/common.sh@43 -- # break 00:05:23.759 15:24:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.759 15:24:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.759 SPDK target shutdown done 00:05:23.759 15:24:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:23.759 INFO: relaunching applications... 
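
Note: the relaunch announced above is the round-trip this test exists for: a config snapshot taken from the live target must boot an identical target non-interactively. Stripped of the harness, with the flags copied from the command traced just below:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # restart non-interactively from the snapshot written by save_config:
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/spdk_tgt_config.json"
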
00:05:23.759 15:24:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.759 15:24:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:23.759 15:24:04 json_config -- json_config/common.sh@10 -- # shift 00:05:23.759 15:24:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.759 15:24:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.759 15:24:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.759 15:24:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.759 15:24:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.759 15:24:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=115260 00:05:23.759 15:24:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.759 Waiting for target to run... 00:05:23.759 15:24:04 json_config -- json_config/common.sh@25 -- # waitforlisten 115260 /var/tmp/spdk_tgt.sock 00:05:23.759 15:24:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 115260 ']' 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.759 15:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.759 [2024-09-27 15:24:04.119553] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:23.759 [2024-09-27 15:24:04.119615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115260 ] 00:05:24.020 [2024-09-27 15:24:04.426831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.020 [2024-09-27 15:24:04.444934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.593 [2024-09-27 15:24:04.916657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.593 [2024-09-27 15:24:04.948993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.593 15:24:04 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.593 15:24:04 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:24.593 15:24:04 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.593 00:05:24.593 15:24:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:24.593 15:24:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:24.593 INFO: Checking if target configuration is the same... 
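
Note: the equality check announced above is delegated to json_diff.sh, traced below: both configs are normalized by config_filter.py -method sort into temp files and then compared with diff -u. The same idea fits in one pipeline using process substitution instead of the script's temp files; this is a sketch that assumes config_filter.py reads stdin, as the trace suggests:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  diff -u \
      <("$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config |
          "$SPDK/test/json_config/config_filter.py" -method sort) \
      <("$SPDK/test/json_config/config_filter.py" -method sort \
          < "$SPDK/spdk_tgt_config.json") &&
      echo "INFO: JSON config files are the same"
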
00:05:24.593 15:24:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.593 15:24:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:24.593 15:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.593 + '[' 2 -ne 2 ']' 00:05:24.593 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.593 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:24.593 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.593 +++ basename /dev/fd/62 00:05:24.593 ++ mktemp /tmp/62.XXX 00:05:24.593 + tmp_file_1=/tmp/62.8gE 00:05:24.593 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.593 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.593 + tmp_file_2=/tmp/spdk_tgt_config.json.9pm 00:05:24.593 + ret=0 00:05:24.593 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.854 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.115 + diff -u /tmp/62.8gE /tmp/spdk_tgt_config.json.9pm 00:05:25.115 + echo 'INFO: JSON config files are the same' 00:05:25.115 INFO: JSON config files are the same 00:05:25.115 + rm /tmp/62.8gE /tmp/spdk_tgt_config.json.9pm 00:05:25.115 + exit 0 00:05:25.115 15:24:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:25.115 15:24:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:25.115 INFO: changing configuration and checking if this can be detected... 00:05:25.115 15:24:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.115 15:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.115 15:24:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.115 15:24:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:25.115 15:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.115 + '[' 2 -ne 2 ']' 00:05:25.115 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.115 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:25.115 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.115 +++ basename /dev/fd/62 00:05:25.115 ++ mktemp /tmp/62.XXX 00:05:25.115 + tmp_file_1=/tmp/62.D1f 00:05:25.115 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.115 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.115 + tmp_file_2=/tmp/spdk_tgt_config.json.cte 00:05:25.115 + ret=0 00:05:25.115 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.689 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.689 + diff -u /tmp/62.D1f /tmp/spdk_tgt_config.json.cte 00:05:25.689 + ret=1 00:05:25.689 + echo '=== Start of file: /tmp/62.D1f ===' 00:05:25.689 + cat /tmp/62.D1f 00:05:25.689 + echo '=== End of file: /tmp/62.D1f ===' 00:05:25.689 + echo '' 00:05:25.689 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cte ===' 00:05:25.690 + cat /tmp/spdk_tgt_config.json.cte 00:05:25.690 + echo '=== End of file: /tmp/spdk_tgt_config.json.cte ===' 00:05:25.690 + echo '' 00:05:25.690 + rm /tmp/62.D1f /tmp/spdk_tgt_config.json.cte 00:05:25.690 + exit 1 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:25.690 INFO: configuration change detected. 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 115260 ]] 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:25.690 15:24:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.690 15:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.690 15:24:06 json_config -- json_config/json_config.sh@330 -- # killprocess 115260 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@950 -- # '[' -z 115260 ']' 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@954 -- # kill -0 115260 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@955 -- # uname 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.690 15:24:06 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115260 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115260' 00:05:25.690 killing process with pid 115260 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@969 -- # kill 115260 00:05:25.690 15:24:06 json_config -- common/autotest_common.sh@974 -- # wait 115260 00:05:25.951 15:24:06 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.951 15:24:06 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:25.951 15:24:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.951 15:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.951 15:24:06 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:25.951 15:24:06 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:25.951 INFO: Success 00:05:25.951 00:05:25.951 real 0m7.350s 00:05:25.951 user 0m9.110s 00:05:25.951 sys 0m1.784s 00:05:25.951 15:24:06 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.951 15:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.951 ************************************ 00:05:25.951 END TEST json_config 00:05:25.951 ************************************ 00:05:25.951 15:24:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:25.951 15:24:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.951 15:24:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.951 15:24:06 -- common/autotest_common.sh@10 -- # set +x 00:05:26.214 ************************************ 00:05:26.214 START TEST json_config_extra_key 00:05:26.214 ************************************ 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.214 15:24:06 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.214 --rc genhtml_function_coverage=1 00:05:26.214 --rc genhtml_legend=1 00:05:26.214 --rc geninfo_all_blocks=1 00:05:26.214 --rc geninfo_unexecuted_blocks=1 00:05:26.214 00:05:26.214 ' 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.214 --rc genhtml_function_coverage=1 00:05:26.214 --rc genhtml_legend=1 00:05:26.214 --rc geninfo_all_blocks=1 00:05:26.214 --rc geninfo_unexecuted_blocks=1 00:05:26.214 00:05:26.214 ' 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.214 --rc genhtml_function_coverage=1 00:05:26.214 --rc genhtml_legend=1 00:05:26.214 --rc geninfo_all_blocks=1 00:05:26.214 --rc geninfo_unexecuted_blocks=1 00:05:26.214 00:05:26.214 ' 00:05:26.214 15:24:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.214 --rc genhtml_function_coverage=1 00:05:26.214 --rc genhtml_legend=1 00:05:26.214 --rc geninfo_all_blocks=1 00:05:26.214 --rc geninfo_unexecuted_blocks=1 00:05:26.214 00:05:26.214 ' 00:05:26.214 15:24:06 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.214 15:24:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.214 15:24:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.214 15:24:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.214 15:24:06 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.214 15:24:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.214 15:24:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.214 15:24:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.214 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.215 INFO: launching applications... 
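[Annotation] The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test(1)'s -eq demands integer operands, so an empty string is an error (exit status 2) rather than a clean "false". The script only relies on the status being non-zero, so the message is cosmetic, but the usual ways to silence it look like this (flag name hypothetical):

    #!/usr/bin/env bash
    flag=""                                   # empty, as in the trace above

    [ "$flag" -eq 1 ] && echo enabled         # [: : integer expression expected

    [ "${flag:-0}" -eq 1 ] && echo enabled    # default the value first, or...
    [[ $flag == 1 ]] && echo enabled          # ...compare as a string instead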
00:05:26.215 15:24:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=115971 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.215 Waiting for target to run... 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 115971 /var/tmp/spdk_tgt.sock 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 115971 ']' 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.215 15:24:06 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.215 15:24:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.476 [2024-09-27 15:24:06.737826] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:26.476 [2024-09-27 15:24:06.737908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115971 ] 00:05:26.737 [2024-09-27 15:24:07.076104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.737 [2024-09-27 15:24:07.102109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.310 15:24:07 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.310 15:24:07 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:27.310 15:24:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.310 00:05:27.310 15:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.310 INFO: shutting down applications... 
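[Annotation] The shutdown traced below sends SIGINT and then polls with kill -0, which delivers no signal and only tests whether the pid still exists, sleeping 0.5 s between probes for up to 30 iterations. A condensed sketch of that loop; the pid is taken from this run, and the final escalation is illustrative rather than what common.sh does:

    #!/usr/bin/env bash
    # Gracefully stop an spdk_tgt and wait up to ~15 s for it to exit.
    pid=115971

    kill -SIGINT "$pid" 2>/dev/null

    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            exit 0
        fi
        sleep 0.5
    done

    echo "pid $pid still alive after 15 s" >&2
    kill -9 "$pid" 2>/dev/null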
00:05:27.310 15:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.310 15:24:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.310 15:24:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.310 15:24:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 115971 ]] 00:05:27.310 15:24:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 115971 00:05:27.311 15:24:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.311 15:24:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.311 15:24:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115971 00:05:27.311 15:24:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115971 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.572 15:24:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.572 SPDK target shutdown done 00:05:27.572 15:24:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.572 Success 00:05:27.572 00:05:27.572 real 0m1.581s 00:05:27.572 user 0m1.159s 00:05:27.572 sys 0m0.467s 00:05:27.572 15:24:08 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.572 15:24:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.572 ************************************ 00:05:27.572 END TEST json_config_extra_key 00:05:27.572 ************************************ 00:05:27.834 15:24:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.834 15:24:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.834 15:24:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.834 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.834 ************************************ 00:05:27.834 START TEST alias_rpc 00:05:27.834 ************************************ 00:05:27.834 15:24:08 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.834 * Looking for test storage... 
00:05:27.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.834 15:24:08 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.834 15:24:08 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.834 15:24:08 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.835 15:24:08 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.835 15:24:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.098 15:24:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.098 15:24:08 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.098 15:24:08 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.098 --rc genhtml_branch_coverage=1 00:05:28.098 --rc genhtml_function_coverage=1 00:05:28.098 --rc genhtml_legend=1 00:05:28.098 --rc geninfo_all_blocks=1 00:05:28.098 --rc geninfo_unexecuted_blocks=1 00:05:28.098 00:05:28.098 ' 00:05:28.098 15:24:08 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.098 --rc genhtml_branch_coverage=1 00:05:28.098 --rc genhtml_function_coverage=1 00:05:28.098 --rc genhtml_legend=1 00:05:28.099 --rc geninfo_all_blocks=1 00:05:28.099 --rc geninfo_unexecuted_blocks=1 00:05:28.099 00:05:28.099 ' 00:05:28.099 15:24:08 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.099 --rc genhtml_branch_coverage=1 00:05:28.099 --rc genhtml_function_coverage=1 00:05:28.099 --rc genhtml_legend=1 00:05:28.099 --rc geninfo_all_blocks=1 00:05:28.099 --rc geninfo_unexecuted_blocks=1 00:05:28.099 00:05:28.099 ' 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.099 --rc genhtml_branch_coverage=1 00:05:28.099 --rc genhtml_function_coverage=1 00:05:28.099 --rc genhtml_legend=1 00:05:28.099 --rc geninfo_all_blocks=1 00:05:28.099 --rc geninfo_unexecuted_blocks=1 00:05:28.099 00:05:28.099 ' 00:05:28.099 15:24:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.099 15:24:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=116456 00:05:28.099 15:24:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 116456 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 116456 ']' 00:05:28.099 15:24:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.099 15:24:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 [2024-09-27 15:24:08.399781] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:28.099 [2024-09-27 15:24:08.399857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116456 ] 00:05:28.099 [2024-09-27 15:24:08.480786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.099 [2024-09-27 15:24:08.515590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.044 15:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.044 15:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 116456 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 116456 ']' 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 116456 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116456 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116456' 00:05:29.044 killing process with pid 116456 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@969 -- # kill 116456 00:05:29.044 15:24:09 alias_rpc -- common/autotest_common.sh@974 -- # wait 116456 00:05:29.305 00:05:29.306 real 0m1.504s 00:05:29.306 user 0m1.632s 00:05:29.306 sys 0m0.431s 00:05:29.306 15:24:09 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.306 15:24:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.306 ************************************ 00:05:29.306 END TEST alias_rpc 00:05:29.306 ************************************ 00:05:29.306 15:24:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:29.306 15:24:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.306 15:24:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.306 15:24:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.306 15:24:09 -- common/autotest_common.sh@10 -- # set +x 00:05:29.306 ************************************ 00:05:29.306 START TEST spdkcli_tcp 00:05:29.306 ************************************ 00:05:29.306 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.568 * Looking for test storage... 
00:05:29.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.568 15:24:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:29.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.568 --rc genhtml_branch_coverage=1 00:05:29.568 --rc genhtml_function_coverage=1 00:05:29.568 --rc genhtml_legend=1 00:05:29.568 --rc geninfo_all_blocks=1 00:05:29.568 --rc geninfo_unexecuted_blocks=1 00:05:29.568 00:05:29.568 ' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:29.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.568 --rc genhtml_branch_coverage=1 00:05:29.568 --rc genhtml_function_coverage=1 00:05:29.568 --rc genhtml_legend=1 00:05:29.568 --rc geninfo_all_blocks=1 00:05:29.568 --rc 
geninfo_unexecuted_blocks=1 00:05:29.568 00:05:29.568 ' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:29.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.568 --rc genhtml_branch_coverage=1 00:05:29.568 --rc genhtml_function_coverage=1 00:05:29.568 --rc genhtml_legend=1 00:05:29.568 --rc geninfo_all_blocks=1 00:05:29.568 --rc geninfo_unexecuted_blocks=1 00:05:29.568 00:05:29.568 ' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:29.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.568 --rc genhtml_branch_coverage=1 00:05:29.568 --rc genhtml_function_coverage=1 00:05:29.568 --rc genhtml_legend=1 00:05:29.568 --rc geninfo_all_blocks=1 00:05:29.568 --rc geninfo_unexecuted_blocks=1 00:05:29.568 00:05:29.568 ' 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=117010 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 117010 00:05:29.568 15:24:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 117010 ']' 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.568 15:24:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.569 15:24:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.569 [2024-09-27 15:24:09.983562] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
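[Annotation] Below, the spdkcli_tcp test drives the target's JSON-RPC server over TCP by bridging a localhost TCP port to the UNIX-domain socket with socat, then pointing rpc.py at the TCP side. A minimal sketch of the bridge, using the port and socket path from this run:

    #!/usr/bin/env bash
    # Expose the spdk_tgt RPC UNIX socket on 127.0.0.1:9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # RPC clients can now use TCP (-r retries, -t timeout in seconds):
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"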
00:05:29.569 [2024-09-27 15:24:09.983635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117010 ] 00:05:29.830 [2024-09-27 15:24:10.067054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.830 [2024-09-27 15:24:10.110402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.830 [2024-09-27 15:24:10.110403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.419 15:24:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.419 15:24:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:30.419 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=117294 00:05:30.419 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.419 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.681 [ 00:05:30.681 "bdev_malloc_delete", 00:05:30.681 "bdev_malloc_create", 00:05:30.681 "bdev_null_resize", 00:05:30.681 "bdev_null_delete", 00:05:30.681 "bdev_null_create", 00:05:30.681 "bdev_nvme_cuse_unregister", 00:05:30.681 "bdev_nvme_cuse_register", 00:05:30.681 "bdev_opal_new_user", 00:05:30.682 "bdev_opal_set_lock_state", 00:05:30.682 "bdev_opal_delete", 00:05:30.682 "bdev_opal_get_info", 00:05:30.682 "bdev_opal_create", 00:05:30.682 "bdev_nvme_opal_revert", 00:05:30.682 "bdev_nvme_opal_init", 00:05:30.682 "bdev_nvme_send_cmd", 00:05:30.682 "bdev_nvme_set_keys", 00:05:30.682 "bdev_nvme_get_path_iostat", 00:05:30.682 "bdev_nvme_get_mdns_discovery_info", 00:05:30.682 "bdev_nvme_stop_mdns_discovery", 00:05:30.682 "bdev_nvme_start_mdns_discovery", 00:05:30.682 "bdev_nvme_set_multipath_policy", 00:05:30.682 "bdev_nvme_set_preferred_path", 00:05:30.682 "bdev_nvme_get_io_paths", 00:05:30.682 "bdev_nvme_remove_error_injection", 00:05:30.682 "bdev_nvme_add_error_injection", 00:05:30.682 "bdev_nvme_get_discovery_info", 00:05:30.682 "bdev_nvme_stop_discovery", 00:05:30.682 "bdev_nvme_start_discovery", 00:05:30.682 "bdev_nvme_get_controller_health_info", 00:05:30.682 "bdev_nvme_disable_controller", 00:05:30.682 "bdev_nvme_enable_controller", 00:05:30.682 "bdev_nvme_reset_controller", 00:05:30.682 "bdev_nvme_get_transport_statistics", 00:05:30.682 "bdev_nvme_apply_firmware", 00:05:30.682 "bdev_nvme_detach_controller", 00:05:30.682 "bdev_nvme_get_controllers", 00:05:30.682 "bdev_nvme_attach_controller", 00:05:30.682 "bdev_nvme_set_hotplug", 00:05:30.682 "bdev_nvme_set_options", 00:05:30.682 "bdev_passthru_delete", 00:05:30.682 "bdev_passthru_create", 00:05:30.682 "bdev_lvol_set_parent_bdev", 00:05:30.682 "bdev_lvol_set_parent", 00:05:30.682 "bdev_lvol_check_shallow_copy", 00:05:30.682 "bdev_lvol_start_shallow_copy", 00:05:30.682 "bdev_lvol_grow_lvstore", 00:05:30.682 "bdev_lvol_get_lvols", 00:05:30.682 "bdev_lvol_get_lvstores", 00:05:30.682 "bdev_lvol_delete", 00:05:30.682 "bdev_lvol_set_read_only", 00:05:30.682 "bdev_lvol_resize", 00:05:30.682 "bdev_lvol_decouple_parent", 00:05:30.682 "bdev_lvol_inflate", 00:05:30.682 "bdev_lvol_rename", 00:05:30.682 "bdev_lvol_clone_bdev", 00:05:30.682 "bdev_lvol_clone", 00:05:30.682 "bdev_lvol_snapshot", 00:05:30.682 "bdev_lvol_create", 00:05:30.682 "bdev_lvol_delete_lvstore", 00:05:30.682 "bdev_lvol_rename_lvstore", 
00:05:30.682 "bdev_lvol_create_lvstore", 00:05:30.682 "bdev_raid_set_options", 00:05:30.682 "bdev_raid_remove_base_bdev", 00:05:30.682 "bdev_raid_add_base_bdev", 00:05:30.682 "bdev_raid_delete", 00:05:30.682 "bdev_raid_create", 00:05:30.682 "bdev_raid_get_bdevs", 00:05:30.682 "bdev_error_inject_error", 00:05:30.682 "bdev_error_delete", 00:05:30.682 "bdev_error_create", 00:05:30.682 "bdev_split_delete", 00:05:30.682 "bdev_split_create", 00:05:30.682 "bdev_delay_delete", 00:05:30.682 "bdev_delay_create", 00:05:30.682 "bdev_delay_update_latency", 00:05:30.682 "bdev_zone_block_delete", 00:05:30.682 "bdev_zone_block_create", 00:05:30.682 "blobfs_create", 00:05:30.682 "blobfs_detect", 00:05:30.682 "blobfs_set_cache_size", 00:05:30.682 "bdev_aio_delete", 00:05:30.682 "bdev_aio_rescan", 00:05:30.682 "bdev_aio_create", 00:05:30.682 "bdev_ftl_set_property", 00:05:30.682 "bdev_ftl_get_properties", 00:05:30.682 "bdev_ftl_get_stats", 00:05:30.682 "bdev_ftl_unmap", 00:05:30.682 "bdev_ftl_unload", 00:05:30.682 "bdev_ftl_delete", 00:05:30.682 "bdev_ftl_load", 00:05:30.682 "bdev_ftl_create", 00:05:30.682 "bdev_virtio_attach_controller", 00:05:30.682 "bdev_virtio_scsi_get_devices", 00:05:30.682 "bdev_virtio_detach_controller", 00:05:30.682 "bdev_virtio_blk_set_hotplug", 00:05:30.682 "bdev_iscsi_delete", 00:05:30.682 "bdev_iscsi_create", 00:05:30.682 "bdev_iscsi_set_options", 00:05:30.682 "accel_error_inject_error", 00:05:30.682 "ioat_scan_accel_module", 00:05:30.682 "dsa_scan_accel_module", 00:05:30.682 "iaa_scan_accel_module", 00:05:30.682 "vfu_virtio_create_fs_endpoint", 00:05:30.682 "vfu_virtio_create_scsi_endpoint", 00:05:30.682 "vfu_virtio_scsi_remove_target", 00:05:30.682 "vfu_virtio_scsi_add_target", 00:05:30.682 "vfu_virtio_create_blk_endpoint", 00:05:30.682 "vfu_virtio_delete_endpoint", 00:05:30.682 "keyring_file_remove_key", 00:05:30.682 "keyring_file_add_key", 00:05:30.682 "keyring_linux_set_options", 00:05:30.682 "fsdev_aio_delete", 00:05:30.682 "fsdev_aio_create", 00:05:30.682 "iscsi_get_histogram", 00:05:30.682 "iscsi_enable_histogram", 00:05:30.682 "iscsi_set_options", 00:05:30.682 "iscsi_get_auth_groups", 00:05:30.682 "iscsi_auth_group_remove_secret", 00:05:30.682 "iscsi_auth_group_add_secret", 00:05:30.682 "iscsi_delete_auth_group", 00:05:30.682 "iscsi_create_auth_group", 00:05:30.682 "iscsi_set_discovery_auth", 00:05:30.682 "iscsi_get_options", 00:05:30.682 "iscsi_target_node_request_logout", 00:05:30.682 "iscsi_target_node_set_redirect", 00:05:30.682 "iscsi_target_node_set_auth", 00:05:30.682 "iscsi_target_node_add_lun", 00:05:30.682 "iscsi_get_stats", 00:05:30.682 "iscsi_get_connections", 00:05:30.682 "iscsi_portal_group_set_auth", 00:05:30.682 "iscsi_start_portal_group", 00:05:30.682 "iscsi_delete_portal_group", 00:05:30.682 "iscsi_create_portal_group", 00:05:30.682 "iscsi_get_portal_groups", 00:05:30.682 "iscsi_delete_target_node", 00:05:30.682 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.682 "iscsi_target_node_add_pg_ig_maps", 00:05:30.682 "iscsi_create_target_node", 00:05:30.682 "iscsi_get_target_nodes", 00:05:30.682 "iscsi_delete_initiator_group", 00:05:30.682 "iscsi_initiator_group_remove_initiators", 00:05:30.682 "iscsi_initiator_group_add_initiators", 00:05:30.682 "iscsi_create_initiator_group", 00:05:30.682 "iscsi_get_initiator_groups", 00:05:30.682 "nvmf_set_crdt", 00:05:30.682 "nvmf_set_config", 00:05:30.682 "nvmf_set_max_subsystems", 00:05:30.682 "nvmf_stop_mdns_prr", 00:05:30.682 "nvmf_publish_mdns_prr", 00:05:30.682 "nvmf_subsystem_get_listeners", 00:05:30.682 
"nvmf_subsystem_get_qpairs", 00:05:30.682 "nvmf_subsystem_get_controllers", 00:05:30.682 "nvmf_get_stats", 00:05:30.682 "nvmf_get_transports", 00:05:30.682 "nvmf_create_transport", 00:05:30.682 "nvmf_get_targets", 00:05:30.682 "nvmf_delete_target", 00:05:30.682 "nvmf_create_target", 00:05:30.682 "nvmf_subsystem_allow_any_host", 00:05:30.682 "nvmf_subsystem_set_keys", 00:05:30.682 "nvmf_subsystem_remove_host", 00:05:30.682 "nvmf_subsystem_add_host", 00:05:30.682 "nvmf_ns_remove_host", 00:05:30.682 "nvmf_ns_add_host", 00:05:30.682 "nvmf_subsystem_remove_ns", 00:05:30.682 "nvmf_subsystem_set_ns_ana_group", 00:05:30.682 "nvmf_subsystem_add_ns", 00:05:30.682 "nvmf_subsystem_listener_set_ana_state", 00:05:30.682 "nvmf_discovery_get_referrals", 00:05:30.682 "nvmf_discovery_remove_referral", 00:05:30.682 "nvmf_discovery_add_referral", 00:05:30.682 "nvmf_subsystem_remove_listener", 00:05:30.682 "nvmf_subsystem_add_listener", 00:05:30.682 "nvmf_delete_subsystem", 00:05:30.682 "nvmf_create_subsystem", 00:05:30.682 "nvmf_get_subsystems", 00:05:30.682 "env_dpdk_get_mem_stats", 00:05:30.682 "nbd_get_disks", 00:05:30.682 "nbd_stop_disk", 00:05:30.682 "nbd_start_disk", 00:05:30.682 "ublk_recover_disk", 00:05:30.682 "ublk_get_disks", 00:05:30.682 "ublk_stop_disk", 00:05:30.682 "ublk_start_disk", 00:05:30.682 "ublk_destroy_target", 00:05:30.682 "ublk_create_target", 00:05:30.682 "virtio_blk_create_transport", 00:05:30.682 "virtio_blk_get_transports", 00:05:30.682 "vhost_controller_set_coalescing", 00:05:30.682 "vhost_get_controllers", 00:05:30.682 "vhost_delete_controller", 00:05:30.682 "vhost_create_blk_controller", 00:05:30.682 "vhost_scsi_controller_remove_target", 00:05:30.682 "vhost_scsi_controller_add_target", 00:05:30.682 "vhost_start_scsi_controller", 00:05:30.682 "vhost_create_scsi_controller", 00:05:30.682 "thread_set_cpumask", 00:05:30.682 "scheduler_set_options", 00:05:30.682 "framework_get_governor", 00:05:30.682 "framework_get_scheduler", 00:05:30.682 "framework_set_scheduler", 00:05:30.682 "framework_get_reactors", 00:05:30.682 "thread_get_io_channels", 00:05:30.682 "thread_get_pollers", 00:05:30.682 "thread_get_stats", 00:05:30.682 "framework_monitor_context_switch", 00:05:30.682 "spdk_kill_instance", 00:05:30.682 "log_enable_timestamps", 00:05:30.682 "log_get_flags", 00:05:30.682 "log_clear_flag", 00:05:30.682 "log_set_flag", 00:05:30.682 "log_get_level", 00:05:30.682 "log_set_level", 00:05:30.682 "log_get_print_level", 00:05:30.682 "log_set_print_level", 00:05:30.682 "framework_enable_cpumask_locks", 00:05:30.682 "framework_disable_cpumask_locks", 00:05:30.682 "framework_wait_init", 00:05:30.682 "framework_start_init", 00:05:30.682 "scsi_get_devices", 00:05:30.682 "bdev_get_histogram", 00:05:30.682 "bdev_enable_histogram", 00:05:30.682 "bdev_set_qos_limit", 00:05:30.682 "bdev_set_qd_sampling_period", 00:05:30.682 "bdev_get_bdevs", 00:05:30.682 "bdev_reset_iostat", 00:05:30.682 "bdev_get_iostat", 00:05:30.682 "bdev_examine", 00:05:30.682 "bdev_wait_for_examine", 00:05:30.682 "bdev_set_options", 00:05:30.682 "accel_get_stats", 00:05:30.682 "accel_set_options", 00:05:30.682 "accel_set_driver", 00:05:30.682 "accel_crypto_key_destroy", 00:05:30.682 "accel_crypto_keys_get", 00:05:30.682 "accel_crypto_key_create", 00:05:30.682 "accel_assign_opc", 00:05:30.682 "accel_get_module_info", 00:05:30.682 "accel_get_opc_assignments", 00:05:30.682 "vmd_rescan", 00:05:30.682 "vmd_remove_device", 00:05:30.682 "vmd_enable", 00:05:30.682 "sock_get_default_impl", 00:05:30.682 "sock_set_default_impl", 
00:05:30.682 "sock_impl_set_options", 00:05:30.682 "sock_impl_get_options", 00:05:30.682 "iobuf_get_stats", 00:05:30.682 "iobuf_set_options", 00:05:30.683 "keyring_get_keys", 00:05:30.683 "vfu_tgt_set_base_path", 00:05:30.683 "framework_get_pci_devices", 00:05:30.683 "framework_get_config", 00:05:30.683 "framework_get_subsystems", 00:05:30.683 "fsdev_set_opts", 00:05:30.683 "fsdev_get_opts", 00:05:30.683 "trace_get_info", 00:05:30.683 "trace_get_tpoint_group_mask", 00:05:30.683 "trace_disable_tpoint_group", 00:05:30.683 "trace_enable_tpoint_group", 00:05:30.683 "trace_clear_tpoint_mask", 00:05:30.683 "trace_set_tpoint_mask", 00:05:30.683 "notify_get_notifications", 00:05:30.683 "notify_get_types", 00:05:30.683 "spdk_get_version", 00:05:30.683 "rpc_get_methods" 00:05:30.683 ] 00:05:30.683 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.683 15:24:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.683 15:24:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.683 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.683 15:24:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 117010 00:05:30.683 15:24:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 117010 ']' 00:05:30.683 15:24:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 117010 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117010 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117010' 00:05:30.683 killing process with pid 117010 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 117010 00:05:30.683 15:24:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 117010 00:05:30.945 00:05:30.945 real 0m1.552s 00:05:30.945 user 0m2.797s 00:05:30.945 sys 0m0.492s 00:05:30.945 15:24:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.945 15:24:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.945 ************************************ 00:05:30.945 END TEST spdkcli_tcp 00:05:30.945 ************************************ 00:05:30.945 15:24:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.945 15:24:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.945 15:24:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.945 15:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.945 ************************************ 00:05:30.945 START TEST dpdk_mem_utility 00:05:30.945 ************************************ 00:05:30.945 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.945 * Looking for test storage... 
00:05:31.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.208 15:24:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.208 --rc genhtml_branch_coverage=1 00:05:31.208 --rc genhtml_function_coverage=1 00:05:31.208 --rc genhtml_legend=1 00:05:31.208 --rc geninfo_all_blocks=1 00:05:31.208 --rc geninfo_unexecuted_blocks=1 00:05:31.208 00:05:31.208 ' 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.208 --rc 
genhtml_branch_coverage=1 00:05:31.208 --rc genhtml_function_coverage=1 00:05:31.208 --rc genhtml_legend=1 00:05:31.208 --rc geninfo_all_blocks=1 00:05:31.208 --rc geninfo_unexecuted_blocks=1 00:05:31.208 00:05:31.208 ' 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.208 --rc genhtml_branch_coverage=1 00:05:31.208 --rc genhtml_function_coverage=1 00:05:31.208 --rc genhtml_legend=1 00:05:31.208 --rc geninfo_all_blocks=1 00:05:31.208 --rc geninfo_unexecuted_blocks=1 00:05:31.208 00:05:31.208 ' 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.208 --rc genhtml_branch_coverage=1 00:05:31.208 --rc genhtml_function_coverage=1 00:05:31.208 --rc genhtml_legend=1 00:05:31.208 --rc geninfo_all_blocks=1 00:05:31.208 --rc geninfo_unexecuted_blocks=1 00:05:31.208 00:05:31.208 ' 00:05:31.208 15:24:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.208 15:24:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=117459 00:05:31.208 15:24:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 117459 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 117459 ']' 00:05:31.208 15:24:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.208 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.209 15:24:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.209 [2024-09-27 15:24:11.600853] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
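[Annotation] The memory-utility test below asks the target to write its DPDK heap, mempool and memzone state to a dump file via the env_dpdk_get_mem_stats RPC, then summarizes the dump offline with dpdk_mem_info.py (plain for the totals, -m <heap id> for a per-heap breakdown). A sketch of that round trip, assuming the scripts read the default /tmp/spdk_mem_dump.txt location shown in the RPC reply:

    #!/usr/bin/env bash
    # Ask the running target to dump its DPDK memory state...
    rpc.py env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # ...then post-process the dump without touching the target again.
    dpdk_mem_info.py                   # heap/mempool/memzone totals
    dpdk_mem_info.py -m 0              # detailed view of heap id 0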
00:05:31.209 [2024-09-27 15:24:11.600940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117459 ] 00:05:31.209 [2024-09-27 15:24:11.681932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.470 [2024-09-27 15:24:11.716122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.042 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.042 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:32.042 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.042 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.042 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.042 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.042 { 00:05:32.042 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.042 } 00:05:32.042 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.042 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.042 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:32.042 1 heaps totaling size 860.000000 MiB 00:05:32.042 size: 860.000000 MiB heap id: 0 00:05:32.042 end heaps---------- 00:05:32.042 9 mempools totaling size 642.649841 MiB 00:05:32.042 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.042 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.042 size: 92.545471 MiB name: bdev_io_117459 00:05:32.042 size: 51.011292 MiB name: evtpool_117459 00:05:32.042 size: 50.003479 MiB name: msgpool_117459 00:05:32.042 size: 36.509338 MiB name: fsdev_io_117459 00:05:32.042 size: 21.763794 MiB name: PDU_Pool 00:05:32.042 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.042 size: 0.026123 MiB name: Session_Pool 00:05:32.042 end mempools------- 00:05:32.042 6 memzones totaling size 4.142822 MiB 00:05:32.042 size: 1.000366 MiB name: RG_ring_0_117459 00:05:32.042 size: 1.000366 MiB name: RG_ring_1_117459 00:05:32.042 size: 1.000366 MiB name: RG_ring_4_117459 00:05:32.042 size: 1.000366 MiB name: RG_ring_5_117459 00:05:32.042 size: 0.125366 MiB name: RG_ring_2_117459 00:05:32.042 size: 0.015991 MiB name: RG_ring_3_117459 00:05:32.042 end memzones------- 00:05:32.042 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.042 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:32.042 list of free elements. 
size: 13.984680 MiB 00:05:32.042 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:32.042 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:32.042 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:32.042 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:32.042 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:32.042 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:32.042 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:32.042 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:32.042 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:32.042 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:32.042 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:32.043 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:32.043 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:32.043 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:32.043 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:32.043 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:32.043 list of standard malloc elements. size: 199.218628 MiB 00:05:32.043 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:32.043 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:32.043 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:32.043 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:32.043 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:32.043 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:32.043 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:32.043 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:32.043 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:32.043 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:32.043 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:32.043 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:32.043 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:32.043 list of memzone associated elements. size: 646.796692 MiB 00:05:32.043 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:32.043 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.043 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:32.043 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.043 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:32.043 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_117459_0 00:05:32.043 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:32.043 associated memzone info: size: 48.002930 MiB name: MP_evtpool_117459_0 00:05:32.043 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:32.043 associated memzone info: size: 48.002930 MiB name: MP_msgpool_117459_0 00:05:32.043 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:32.043 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_117459_0 00:05:32.043 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:32.043 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.043 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:32.043 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.043 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:32.043 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_117459 00:05:32.043 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:32.043 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_117459 00:05:32.043 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:32.043 associated memzone info: size: 1.007996 MiB name: MP_evtpool_117459 00:05:32.043 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:32.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.043 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:32.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.043 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:32.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.043 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:32.043 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.043 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:32.043 associated memzone info: size: 1.000366 MiB name: RG_ring_0_117459 00:05:32.043 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:32.043 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_117459 00:05:32.043 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:32.043 associated memzone info: size: 1.000366 MiB name: RG_ring_4_117459 00:05:32.043 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:32.043 associated memzone info: size: 1.000366 MiB name: RG_ring_5_117459 00:05:32.043 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:32.043 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_117459 00:05:32.043 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:32.043 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_117459 00:05:32.043 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:32.043 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.043 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:32.043 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.043 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:32.043 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.043 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:32.043 associated memzone info: size: 0.125366 MiB name: RG_ring_2_117459 00:05:32.043 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:32.043 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.043 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:32.043 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.043 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:32.043 associated memzone info: size: 0.015991 MiB name: RG_ring_3_117459 00:05:32.043 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:32.043 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.043 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:32.043 associated memzone info: size: 0.000183 MiB name: MP_msgpool_117459 00:05:32.043 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:32.043 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_117459 00:05:32.043 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:32.043 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_117459 00:05:32.043 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:32.043 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.043 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.043 15:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 117459 00:05:32.043 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 117459 ']' 00:05:32.043 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 117459 00:05:32.043 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:32.043 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.043 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117459 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117459' 00:05:32.305 killing 
process with pid 117459 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 117459 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 117459 00:05:32.305 00:05:32.305 real 0m1.405s 00:05:32.305 user 0m1.459s 00:05:32.305 sys 0m0.429s 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.305 15:24:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.305 ************************************ 00:05:32.305 END TEST dpdk_mem_utility 00:05:32.305 ************************************ 00:05:32.305 15:24:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.305 15:24:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.305 15:24:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.305 15:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:32.568 ************************************ 00:05:32.568 START TEST event 00:05:32.568 ************************************ 00:05:32.568 15:24:12 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.568 * Looking for test storage... 00:05:32.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.568 15:24:12 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.568 15:24:12 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.568 15:24:12 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.568 15:24:12 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.568 15:24:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.568 15:24:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.568 15:24:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.568 15:24:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.568 15:24:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.568 15:24:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.568 15:24:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.568 15:24:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.568 15:24:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.568 15:24:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.568 15:24:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.568 15:24:12 event -- scripts/common.sh@344 -- # case "$op" in 00:05:32.568 15:24:12 event -- scripts/common.sh@345 -- # : 1 00:05:32.568 15:24:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.568 15:24:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.568 15:24:12 event -- scripts/common.sh@365 -- # decimal 1 00:05:32.568 15:24:13 event -- scripts/common.sh@353 -- # local d=1 00:05:32.568 15:24:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.568 15:24:13 event -- scripts/common.sh@355 -- # echo 1 00:05:32.568 15:24:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.568 15:24:13 event -- scripts/common.sh@366 -- # decimal 2 00:05:32.568 15:24:13 event -- scripts/common.sh@353 -- # local d=2 00:05:32.568 15:24:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.568 15:24:13 event -- scripts/common.sh@355 -- # echo 2 00:05:32.568 15:24:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.568 15:24:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.568 15:24:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.568 15:24:13 event -- scripts/common.sh@368 -- # return 0 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.568 --rc genhtml_branch_coverage=1 00:05:32.568 --rc genhtml_function_coverage=1 00:05:32.568 --rc genhtml_legend=1 00:05:32.568 --rc geninfo_all_blocks=1 00:05:32.568 --rc geninfo_unexecuted_blocks=1 00:05:32.568 00:05:32.568 ' 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.568 --rc genhtml_branch_coverage=1 00:05:32.568 --rc genhtml_function_coverage=1 00:05:32.568 --rc genhtml_legend=1 00:05:32.568 --rc geninfo_all_blocks=1 00:05:32.568 --rc geninfo_unexecuted_blocks=1 00:05:32.568 00:05:32.568 ' 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.568 --rc genhtml_branch_coverage=1 00:05:32.568 --rc genhtml_function_coverage=1 00:05:32.568 --rc genhtml_legend=1 00:05:32.568 --rc geninfo_all_blocks=1 00:05:32.568 --rc geninfo_unexecuted_blocks=1 00:05:32.568 00:05:32.568 ' 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.568 --rc genhtml_branch_coverage=1 00:05:32.568 --rc genhtml_function_coverage=1 00:05:32.568 --rc genhtml_legend=1 00:05:32.568 --rc geninfo_all_blocks=1 00:05:32.568 --rc geninfo_unexecuted_blocks=1 00:05:32.568 00:05:32.568 ' 00:05:32.568 15:24:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.568 15:24:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.568 15:24:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:32.568 15:24:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.568 15:24:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.568 ************************************ 00:05:32.568 START TEST event_perf 00:05:32.568 ************************************ 00:05:32.569 15:24:13 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:32.831 Running I/O for 1 seconds...[2024-09-27 15:24:13.070556] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:32.831 [2024-09-27 15:24:13.070649] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117783 ] 00:05:32.831 [2024-09-27 15:24:13.156510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.831 [2024-09-27 15:24:13.199627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.831 [2024-09-27 15:24:13.199785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.831 [2024-09-27 15:24:13.199947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.831 Running I/O for 1 seconds...[2024-09-27 15:24:13.199947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.776 00:05:33.776 lcore 0: 185336 00:05:33.776 lcore 1: 185339 00:05:33.776 lcore 2: 185339 00:05:33.776 lcore 3: 185339 00:05:33.776 done. 00:05:33.776 00:05:33.776 real 0m1.186s 00:05:33.776 user 0m4.080s 00:05:33.776 sys 0m0.102s 00:05:33.776 15:24:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.776 15:24:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.776 ************************************ 00:05:33.776 END TEST event_perf 00:05:33.776 ************************************ 00:05:34.037 15:24:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.037 15:24:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:34.037 15:24:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.037 15:24:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.038 ************************************ 00:05:34.038 START TEST event_reactor 00:05:34.038 ************************************ 00:05:34.038 15:24:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.038 [2024-09-27 15:24:14.328497] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
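The event_perf run above fires events at all four reactors (-m 0xF) for one second (-t 1) and prints a per-lcore completion count; roughly equal counts, like the ~185k-per-core figures here, indicate the load was spread evenly. A sketch of the same invocation with a quick total, assuming the flags and output format seen in this trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/event/event_perf/event_perf -m 0xF -t 1 |
  awk '$1 == "lcore" { sum += $3 } END { print sum, "events handled in 1s" }'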
00:05:34.038 [2024-09-27 15:24:14.328592] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118135 ] 00:05:34.038 [2024-09-27 15:24:14.414544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.038 [2024-09-27 15:24:14.453580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.422 test_start 00:05:35.422 oneshot 00:05:35.422 tick 100 00:05:35.422 tick 100 00:05:35.422 tick 250 00:05:35.422 tick 100 00:05:35.422 tick 100 00:05:35.422 tick 100 00:05:35.422 tick 250 00:05:35.422 tick 500 00:05:35.422 tick 100 00:05:35.422 tick 100 00:05:35.422 tick 250 00:05:35.422 tick 100 00:05:35.422 tick 100 00:05:35.422 test_end 00:05:35.422 00:05:35.422 real 0m1.180s 00:05:35.422 user 0m1.079s 00:05:35.422 sys 0m0.096s 00:05:35.422 15:24:15 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.422 15:24:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:35.422 ************************************ 00:05:35.422 END TEST event_reactor 00:05:35.422 ************************************ 00:05:35.422 15:24:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.422 15:24:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:35.422 15:24:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.422 15:24:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.422 ************************************ 00:05:35.422 START TEST event_reactor_perf 00:05:35.422 ************************************ 00:05:35.422 15:24:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.422 [2024-09-27 15:24:15.586398] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:35.422 [2024-09-27 15:24:15.586480] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118483 ] 00:05:35.422 [2024-09-27 15:24:15.669740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.422 [2024-09-27 15:24:15.701873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.366 test_start 00:05:36.366 test_end 00:05:36.366 Performance: 536713 events per second 00:05:36.366 00:05:36.366 real 0m1.171s 00:05:36.366 user 0m1.083s 00:05:36.366 sys 0m0.084s 00:05:36.366 15:24:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.366 15:24:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.366 ************************************ 00:05:36.366 END TEST event_reactor_perf 00:05:36.366 ************************************ 00:05:36.366 15:24:16 event -- event/event.sh@49 -- # uname -s 00:05:36.366 15:24:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.366 15:24:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.366 15:24:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.366 15:24:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.366 15:24:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.366 ************************************ 00:05:36.366 START TEST event_scheduler 00:05:36.366 ************************************ 00:05:36.366 15:24:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.628 * Looking for test storage... 
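The two single-core passes above exercise one reactor each: event_reactor interleaves a oneshot event with what look like timed pollers on 100/250/500-unit periods (the "tick" lines), while event_reactor_perf measures raw event throughput, 536713 events per second in this run. A sketch that extracts that figure, assuming the "Performance: N events per second" output format shown in the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
eps=$($SPDK/test/event/reactor_perf/reactor_perf -t 1 | awk '$1 == "Performance:" { print $2 }')
echo "single-core reactor throughput: ${eps} events/sec"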
00:05:36.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:36.628 15:24:16 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.628 15:24:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.628 15:24:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.628 15:24:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.628 --rc genhtml_branch_coverage=1 00:05:36.628 --rc genhtml_function_coverage=1 00:05:36.628 --rc genhtml_legend=1 00:05:36.628 --rc geninfo_all_blocks=1 00:05:36.628 --rc geninfo_unexecuted_blocks=1 00:05:36.628 00:05:36.628 ' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.628 --rc genhtml_branch_coverage=1 00:05:36.628 --rc genhtml_function_coverage=1 00:05:36.628 --rc genhtml_legend=1 00:05:36.628 --rc geninfo_all_blocks=1 00:05:36.628 --rc geninfo_unexecuted_blocks=1 00:05:36.628 00:05:36.628 ' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.628 --rc genhtml_branch_coverage=1 00:05:36.628 --rc genhtml_function_coverage=1 00:05:36.628 --rc genhtml_legend=1 00:05:36.628 --rc geninfo_all_blocks=1 00:05:36.628 --rc geninfo_unexecuted_blocks=1 00:05:36.628 00:05:36.628 ' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.628 --rc genhtml_branch_coverage=1 00:05:36.628 --rc genhtml_function_coverage=1 00:05:36.628 --rc genhtml_legend=1 00:05:36.628 --rc geninfo_all_blocks=1 00:05:36.628 --rc geninfo_unexecuted_blocks=1 00:05:36.628 00:05:36.628 ' 00:05:36.628 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.628 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=118843 00:05:36.628 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.628 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 118843 00:05:36.628 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
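The cmp_versions trace repeated before each suite (here concluding that lcov 1.15 < 2, so the legacy --rc lcov_* option names get exported into LCOV_OPTS) is a field-wise numeric compare. A hand-rolled sketch of that logic, not the actual scripts/common.sh helper; the real one also validates each field against ^[0-9]+$, as the "decimal" calls above show:

ver_lt() {                       # "is $1 < $2?", fields split on . - :
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1                                        # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov < 2: export legacy LCOV_OPTS"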
00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 118843 ']' 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.628 15:24:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.628 [2024-09-27 15:24:17.074592] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:36.628 [2024-09-27 15:24:17.074668] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118843 ] 00:05:36.890 [2024-09-27 15:24:17.155118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.890 [2024-09-27 15:24:17.194591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.890 [2024-09-27 15:24:17.194749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.890 [2024-09-27 15:24:17.194924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.890 [2024-09-27 15:24:17.194925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:37.462 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.462 [2024-09-27 15:24:17.885558] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:37.462 [2024-09-27 15:24:17.885577] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.462 [2024-09-27 15:24:17.885586] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.462 [2024-09-27 15:24:17.885592] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.462 [2024-09-27 15:24:17.885598] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.462 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.462 [2024-09-27 15:24:17.940789] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
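The sequence just traced is the standard way to select a scheduler at runtime: the test app starts paused under --wait-for-rpc, the dynamic scheduler is chosen over RPC, and only then is subsystem init released. A sketch using this run's paths and flags; the scheduler_thread_create call assumes the test's scheduler_plugin is importable by rpc.py, which the harness arranges:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must precede init, as above
$SPDK/scripts/rpc.py framework_start_init

# e.g. one busy thread pinned to core 0, matching scheduler.sh@12 in the trace:
$SPDK/scripts/rpc.py --plugin scheduler_plugin \
    scheduler_thread_create -n active_pinned -m 0x1 -a 100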
00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.462 15:24:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.462 15:24:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 ************************************ 00:05:37.724 START TEST scheduler_create_thread 00:05:37.724 ************************************ 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 2 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 3 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 4 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 5 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 6 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 7 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.724 8 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.724 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.297 9 00:05:38.297 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.297 15:24:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.297 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.297 15:24:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.240 10 00:05:39.240 15:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.240 15:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.240 15:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.240 15:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.185 15:24:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.185 15:24:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.185 15:24:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.185 15:24:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.185 15:24:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.756 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.756 15:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:40.756 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.756 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.700 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.700 15:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.700 15:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.700 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.700 15:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 15:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.961 00:05:41.961 real 0m4.464s 00:05:41.961 user 0m0.025s 00:05:41.961 sys 0m0.007s 00:05:41.961 15:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.961 15:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 ************************************ 00:05:41.961 END TEST scheduler_create_thread 00:05:41.961 ************************************ 00:05:42.222 15:24:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.222 15:24:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 118843 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 118843 ']' 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 118843 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118843 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118843' 00:05:42.222 killing process with pid 118843 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 118843 00:05:42.222 15:24:22 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 118843 00:05:42.482 [2024-09-27 15:24:22.723499] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
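Teardown in these suites follows the killprocess pattern traced above: confirm the pid is still alive, SIGTERM it, then reap it. A simplified sketch; the real helper also inspects ps --no-headers -o comm= to special-case sudo-wrapped targets, a branch omitted here:

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1          # already gone?
  echo "killing process with pid $pid"
  kill "$pid"                         # SIGTERM by default
  wait "$pid" 2>/dev/null || true     # reap it (works because it is our child)
}
killprocess "$scheduler_pid"          # pid from the launch sketch earlier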
00:05:42.483 00:05:42.483 real 0m6.061s 00:05:42.483 user 0m14.413s 00:05:42.483 sys 0m0.430s 00:05:42.483 15:24:22 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.483 15:24:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.483 ************************************ 00:05:42.483 END TEST event_scheduler 00:05:42.483 ************************************ 00:05:42.483 15:24:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:42.483 15:24:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:42.483 15:24:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.483 15:24:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.483 15:24:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.483 ************************************ 00:05:42.483 START TEST app_repeat 00:05:42.483 ************************************ 00:05:42.483 15:24:22 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:42.483 15:24:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=119945 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 119945' 00:05:42.744 Process app_repeat pid: 119945 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.744 spdk_app_start Round 0 00:05:42.744 15:24:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119945 /var/tmp/spdk-nbd.sock 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 119945 ']' 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.744 15:24:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.744 [2024-09-27 15:24:23.002637] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
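The app_repeat pass that begins here drives the same application start/stop cycle four times (-t 4) against nbd-backed malloc bdevs. A sketch of the bring-up, with the flags and RPC calls taken from this trace; it assumes root for modprobe, a kernel nbd module, and the readiness poll again stands in for waitforlisten:

modprobe nbd
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
until $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # -> Malloc0
$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # -> Malloc1
$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1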
00:05:42.744 [2024-09-27 15:24:23.002707] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119945 ] 00:05:42.744 [2024-09-27 15:24:23.084585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.744 [2024-09-27 15:24:23.118486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.744 [2024-09-27 15:24:23.118486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.744 15:24:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.744 15:24:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.744 15:24:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.004 Malloc0 00:05:43.004 15:24:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.265 Malloc1 00:05:43.265 15:24:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.265 15:24:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.526 /dev/nbd0 00:05:43.526 15:24:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.526 15:24:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.526 1+0 records in 00:05:43.526 1+0 records out 00:05:43.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265036 s, 15.5 MB/s 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.526 15:24:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.526 15:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.526 15:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.526 15:24:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.526 /dev/nbd1 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.787 1+0 records in 00:05:43.787 1+0 records out 00:05:43.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241424 s, 17.0 MB/s 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.787 15:24:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.787 
15:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.787 { 00:05:43.787 "nbd_device": "/dev/nbd0", 00:05:43.787 "bdev_name": "Malloc0" 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "nbd_device": "/dev/nbd1", 00:05:43.787 "bdev_name": "Malloc1" 00:05:43.787 } 00:05:43.787 ]' 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.787 { 00:05:43.787 "nbd_device": "/dev/nbd0", 00:05:43.787 "bdev_name": "Malloc0" 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "nbd_device": "/dev/nbd1", 00:05:43.787 "bdev_name": "Malloc1" 00:05:43.787 } 00:05:43.787 ]' 00:05:43.787 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.787 /dev/nbd1' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.048 /dev/nbd1' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.048 256+0 records in 00:05:44.048 256+0 records out 00:05:44.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122055 s, 85.9 MB/s 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.048 256+0 records in 00:05:44.048 256+0 records out 00:05:44.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120151 s, 87.3 MB/s 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.048 256+0 records in 00:05:44.048 256+0 records out 00:05:44.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012795 s, 82.0 MB/s 00:05:44.048 15:24:24 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.048 15:24:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.309 15:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.570 15:24:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.570 15:24:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.830 15:24:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.830 [2024-09-27 15:24:25.262243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.830 [2024-09-27 15:24:25.289156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.830 [2024-09-27 15:24:25.289155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.830 [2024-09-27 15:24:25.318263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.830 [2024-09-27 15:24:25.318293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.131 15:24:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.131 15:24:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.131 spdk_app_start Round 1 00:05:48.131 15:24:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119945 /var/tmp/spdk-nbd.sock 00:05:48.131 15:24:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 119945 ']' 00:05:48.131 15:24:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.131 15:24:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.131 15:24:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
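[editor's note] The nbd_dd_data_verify cycle traced in the round above reduces to the following pattern. This is a condensed, hypothetical sketch assembled from the dd/cmp commands visible in the trace; the tmp path is illustrative (the real test uses spdk/test/event/nbdrandtest) and error handling is elided.

# Write 1 MiB of random data to a scratch file, copy it onto each NBD
# export, then verify each export byte-for-byte against the scratch file.
tmp=/tmp/nbdrandtest
nbd_list=('/dev/nbd0' '/dev/nbd1')
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"                              # read-back verify
done
rm "$tmp"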
00:05:48.132 15:24:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.132 15:24:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.132 15:24:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.132 15:24:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.132 15:24:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.132 Malloc0 00:05:48.132 15:24:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.393 Malloc1 00:05:48.393 15:24:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.393 15:24:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.654 /dev/nbd0 00:05:48.654 15:24:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.654 15:24:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.654 1+0 records in 00:05:48.654 1+0 records out 00:05:48.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275383 s, 14.9 MB/s 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.654 15:24:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.654 15:24:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.654 15:24:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.654 15:24:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.915 /dev/nbd1 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.915 1+0 records in 00:05:48.915 1+0 records out 00:05:48.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279745 s, 14.6 MB/s 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.915 15:24:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:48.915 { 00:05:48.915 "nbd_device": "/dev/nbd0", 00:05:48.915 "bdev_name": "Malloc0" 00:05:48.915 }, 00:05:48.915 { 00:05:48.915 "nbd_device": "/dev/nbd1", 00:05:48.915 "bdev_name": "Malloc1" 00:05:48.915 } 00:05:48.915 ]' 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.915 { 00:05:48.915 "nbd_device": "/dev/nbd0", 00:05:48.915 "bdev_name": "Malloc0" 00:05:48.915 }, 00:05:48.915 { 00:05:48.915 "nbd_device": "/dev/nbd1", 00:05:48.915 "bdev_name": "Malloc1" 00:05:48.915 } 00:05:48.915 ]' 00:05:48.915 15:24:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.178 /dev/nbd1' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.178 /dev/nbd1' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.178 256+0 records in 00:05:49.178 256+0 records out 00:05:49.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125154 s, 83.8 MB/s 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.178 256+0 records in 00:05:49.178 256+0 records out 00:05:49.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119555 s, 87.7 MB/s 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.178 256+0 records in 00:05:49.178 256+0 records out 00:05:49.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128053 s, 81.9 MB/s 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.178 15:24:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.440 15:24:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.702 15:24:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.702 15:24:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.964 15:24:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.964 [2024-09-27 15:24:30.408677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.964 [2024-09-27 15:24:30.435593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.964 [2024-09-27 15:24:30.435593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.225 [2024-09-27 15:24:30.465411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.225 [2024-09-27 15:24:30.465445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.528 spdk_app_start Round 2 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119945 /var/tmp/spdk-nbd.sock 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 119945 ']' 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
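[editor's note] The teardown traced above (nbd_stop_disks followed by waitfornbd_exit for each device) follows the shape below. The 20-iteration retry bound and the grep against /proc/partitions are taken directly from the log; the sleep interval is an assumption, not something the trace shows.

# Stop each NBD export over RPC, then poll /proc/partitions until the
# kernel has actually released the device (or the retry budget runs out).
rpc=/var/tmp/spdk-nbd.sock
for nbd in /dev/nbd0 /dev/nbd1; do
    scripts/rpc.py -s "$rpc" nbd_stop_disk "$nbd"
    name=$(basename "$nbd")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break   # device gone: done
        sleep 0.1                                      # assumed back-off
    done
done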
00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.528 15:24:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.528 Malloc0 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.528 Malloc1 00:05:53.528 15:24:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.528 15:24:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.789 /dev/nbd0 00:05:53.789 15:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.789 15:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:53.789 1+0 records in 00:05:53.789 1+0 records out 00:05:53.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288686 s, 14.2 MB/s 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.789 15:24:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.789 15:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.789 15:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.789 15:24:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.051 /dev/nbd1 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.051 1+0 records in 00:05:54.051 1+0 records out 00:05:54.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281354 s, 14.6 MB/s 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.051 15:24:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.051 15:24:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:54.312 { 00:05:54.312 "nbd_device": "/dev/nbd0", 00:05:54.312 "bdev_name": "Malloc0" 00:05:54.312 }, 00:05:54.312 { 00:05:54.312 "nbd_device": "/dev/nbd1", 00:05:54.312 "bdev_name": "Malloc1" 00:05:54.312 } 00:05:54.312 ]' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.312 { 00:05:54.312 "nbd_device": "/dev/nbd0", 00:05:54.312 "bdev_name": "Malloc0" 00:05:54.312 }, 00:05:54.312 { 00:05:54.312 "nbd_device": "/dev/nbd1", 00:05:54.312 "bdev_name": "Malloc1" 00:05:54.312 } 00:05:54.312 ]' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.312 /dev/nbd1' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.312 /dev/nbd1' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.312 256+0 records in 00:05:54.312 256+0 records out 00:05:54.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127454 s, 82.3 MB/s 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.312 256+0 records in 00:05:54.312 256+0 records out 00:05:54.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122156 s, 85.8 MB/s 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.312 256+0 records in 00:05:54.312 256+0 records out 00:05:54.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132403 s, 79.2 MB/s 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.312 15:24:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.313 15:24:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.574 15:24:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.574 15:24:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.835 15:24:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.835 15:24:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.097 15:24:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.097 [2024-09-27 15:24:35.577788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.360 [2024-09-27 15:24:35.604750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.360 [2024-09-27 15:24:35.604750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.360 [2024-09-27 15:24:35.633782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.360 [2024-09-27 15:24:35.633814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.669 15:24:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 119945 /var/tmp/spdk-nbd.sock 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 119945 ']' 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
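[editor's note] The nbd_get_count check that closes each round, traced above, boils down to listing the exports over RPC and counting /dev/nbd names in the JSON; after teardown the expected count is 0. A minimal sketch of that pipeline, using the same rpc.py/jq/grep chain the log shows:

# List current NBD exports and count them; grep -c prints 0 (and exits
# non-zero) when nothing matches, so guard the assignment.
json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
[ "$count" -eq 0 ]   # all disks stopped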
00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:58.669 15:24:38 event.app_repeat -- event/event.sh@39 -- # killprocess 119945 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 119945 ']' 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 119945 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119945 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119945' 00:05:58.669 killing process with pid 119945 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@969 -- # kill 119945 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@974 -- # wait 119945 00:05:58.669 spdk_app_start is called in Round 0. 00:05:58.669 Shutdown signal received, stop current app iteration 00:05:58.669 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:05:58.669 spdk_app_start is called in Round 1. 00:05:58.669 Shutdown signal received, stop current app iteration 00:05:58.669 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:05:58.669 spdk_app_start is called in Round 2. 00:05:58.669 Shutdown signal received, stop current app iteration 00:05:58.669 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:05:58.669 spdk_app_start is called in Round 3. 
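[editor's note] The three rounds replayed above are driven by the app_repeat loop in test/event/event.sh. The outline below is a paraphrase reconstructed from the trace, not the test's literal source; waitforlisten and killprocess are the autotest_common.sh helpers visible in this log, and the elided middle is the NBD export plus the write/verify cycle sketched earlier.

# One iteration per round: wait for the app's RPC socket, create the two
# malloc bdevs, exercise them over NBD, then ask the app to restart itself.
rpc=/var/tmp/spdk-nbd.sock
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$app_pid" "$rpc"                        # RPC socket is up
    scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096    # Malloc0
    scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096    # Malloc1
    # ... export both over NBD, run the write/verify cycle, stop the disks ...
    scripts/rpc.py -s "$rpc" spdk_kill_instance SIGTERM    # app restarts
    sleep 3
done
waitforlisten "$app_pid" "$rpc"                            # Round 3 sanity check
killprocess "$app_pid"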
00:05:58.669 Shutdown signal received, stop current app iteration 00:05:58.669 15:24:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.669 15:24:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.669 00:05:58.669 real 0m15.845s 00:05:58.669 user 0m34.770s 00:05:58.669 sys 0m2.315s 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.669 15:24:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.669 ************************************ 00:05:58.669 END TEST app_repeat 00:05:58.669 ************************************ 00:05:58.669 15:24:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.669 15:24:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.669 15:24:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.669 15:24:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.669 15:24:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.669 ************************************ 00:05:58.669 START TEST cpu_locks 00:05:58.669 ************************************ 00:05:58.669 15:24:38 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.669 * Looking for test storage... 00:05:58.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.669 15:24:38 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.669 15:24:38 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.669 15:24:38 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.669 15:24:39 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.669 15:24:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:58.669 15:24:39 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.669 15:24:39 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.669 --rc genhtml_branch_coverage=1 00:05:58.669 --rc genhtml_function_coverage=1 00:05:58.669 --rc genhtml_legend=1 00:05:58.669 --rc geninfo_all_blocks=1 00:05:58.669 --rc geninfo_unexecuted_blocks=1 00:05:58.669 00:05:58.669 ' 00:05:58.669 15:24:39 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.669 --rc genhtml_branch_coverage=1 00:05:58.669 --rc genhtml_function_coverage=1 00:05:58.670 --rc genhtml_legend=1 00:05:58.670 --rc geninfo_all_blocks=1 00:05:58.670 --rc geninfo_unexecuted_blocks=1 00:05:58.670 00:05:58.670 ' 00:05:58.670 15:24:39 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.670 --rc genhtml_branch_coverage=1 00:05:58.670 --rc genhtml_function_coverage=1 00:05:58.670 --rc genhtml_legend=1 00:05:58.670 --rc geninfo_all_blocks=1 00:05:58.670 --rc geninfo_unexecuted_blocks=1 00:05:58.670 00:05:58.670 ' 00:05:58.670 15:24:39 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.670 --rc genhtml_branch_coverage=1 00:05:58.670 --rc genhtml_function_coverage=1 00:05:58.670 --rc genhtml_legend=1 00:05:58.670 --rc geninfo_all_blocks=1 00:05:58.670 --rc geninfo_unexecuted_blocks=1 00:05:58.670 00:05:58.670 ' 00:05:58.670 15:24:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.670 15:24:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.670 15:24:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.670 15:24:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.670 15:24:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.670 15:24:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.670 15:24:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.670 ************************************ 
00:05:58.670 START TEST default_locks 00:05:58.670 ************************************ 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=123521 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 123521 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 123521 ']' 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.670 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.931 [2024-09-27 15:24:39.192707] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:58.931 [2024-09-27 15:24:39.192776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123521 ] 00:05:58.931 [2024-09-27 15:24:39.272175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.931 [2024-09-27 15:24:39.305625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.504 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.504 15:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:59.504 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 123521 00:05:59.765 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 123521 00:05:59.765 15:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.026 lslocks: write error 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 123521 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 123521 ']' 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 123521 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123521 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123521' 
00:06:00.026 killing process with pid 123521 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 123521 00:06:00.026 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 123521 00:06:00.287 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 123521 00:06:00.287 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 123521 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 123521 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 123521 ']' 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
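[editor's note] The killprocess sequence traced just above (kill -0 liveness check, ps comm lookup, kill, wait) is roughly the following. This is a paraphrase of the autotest_common.sh helper with its flags copied from the log; the sudo special case the trace tests for is elided here.

# Assert the target is alive, record what it is, terminate it, and reap it
# so the exit status is observed before the test moves on.
killprocess() {
    local pid=$1
    kill -0 "$pid"                      # still alive?
    ps --no-headers -o comm= "$pid"     # e.g. reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap; propagates exit status
}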
00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (123521) - No such process 00:06:00.288 ERROR: process (pid: 123521) is no longer running 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.288 00:06:00.288 real 0m1.558s 00:06:00.288 user 0m1.694s 00:06:00.288 sys 0m0.541s 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.288 15:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.288 ************************************ 00:06:00.288 END TEST default_locks 00:06:00.288 ************************************ 00:06:00.288 15:24:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.288 15:24:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.288 15:24:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.288 15:24:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.288 ************************************ 00:06:00.288 START TEST default_locks_via_rpc 00:06:00.288 ************************************ 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=123851 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 123851 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 123851 ']' 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
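[editor's note] The two probes these cpu_locks tests alternate between, locks_exist and no_locks, are sketched below from the traces above and following. The lslocks/grep pipeline is taken from the log; the lock-file glob in no_locks is an assumption about where the spdk_cpu_lock files live, not something this trace confirms.

# locks_exist: the target pid holds at least one CPU-mask lock file.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
# no_locks: no lock files remain (e.g. after framework_disable_cpumask_locks).
no_locks() {
    shopt -s nullglob                           # empty array when no match
    local lock_files=(/var/tmp/spdk_cpu_lock*)  # assumed lock-file location
    (( ${#lock_files[@]} == 0 ))
}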
00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.288 15:24:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.549 [2024-09-27 15:24:40.825333] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:00.549 [2024-09-27 15:24:40.825390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123851 ] 00:06:00.549 [2024-09-27 15:24:40.905781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.549 [2024-09-27 15:24:40.938270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.122 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.122 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.122 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.122 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.122 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 123851 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 123851 00:06:01.383 15:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.644 15:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 123851 00:06:01.644 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 123851 ']' 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 123851 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123851 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.905 15:24:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123851' 00:06:01.905 killing process with pid 123851 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 123851 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 123851 00:06:01.905 00:06:01.905 real 0m1.619s 00:06:01.905 user 0m1.729s 00:06:01.905 sys 0m0.570s 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.905 15:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.905 ************************************ 00:06:01.905 END TEST default_locks_via_rpc 00:06:01.905 ************************************ 00:06:02.165 15:24:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.165 15:24:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.165 15:24:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.165 15:24:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.165 ************************************ 00:06:02.165 START TEST non_locking_app_on_locked_coremask 00:06:02.165 ************************************ 00:06:02.165 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:02.165 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=124196 00:06:02.165 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 124196 /var/tmp/spdk.sock 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 124196 ']' 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.166 15:24:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.166 [2024-09-27 15:24:42.513207] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:02.166 [2024-09-27 15:24:42.513261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124196 ] 00:06:02.166 [2024-09-27 15:24:42.592872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.166 [2024-09-27 15:24:42.623350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=124280 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 124280 /var/tmp/spdk2.sock 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 124280 ']' 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.107 15:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.107 [2024-09-27 15:24:43.338382] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:03.107 [2024-09-27 15:24:43.338433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124280 ] 00:06:03.107 [2024-09-27 15:24:43.411804] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
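non_locking_app_on_locked_coremask runs two targets on the same core mask: the second instance, launched just above with --disable-cpumask-locks, skips the core claim so both can share core 0. In outline (binary path abbreviated, pids from this run):

    spdk_tgt -m 0x1                                                 # pid 124196 claims core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock  # pid 124280 makes no claim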
00:06:03.107 [2024-09-27 15:24:43.411824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.107 [2024-09-27 15:24:43.468546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.679 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.679 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.679 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 124196 00:06:03.679 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124196 00:06:03.679 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.622 lslocks: write error 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 124196 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 124196 ']' 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 124196 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124196 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124196' 00:06:04.622 killing process with pid 124196 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 124196 00:06:04.622 15:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 124196 00:06:04.882 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 124280 00:06:04.882 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 124280 ']' 00:06:04.882 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 124280 00:06:04.882 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124280 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124280' 00:06:04.883 killing 
process with pid 124280 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 124280 00:06:04.883 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 124280 00:06:05.144 00:06:05.144 real 0m3.025s 00:06:05.144 user 0m3.363s 00:06:05.144 sys 0m0.942s 00:06:05.144 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.144 15:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.144 ************************************ 00:06:05.144 END TEST non_locking_app_on_locked_coremask 00:06:05.144 ************************************ 00:06:05.144 15:24:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.144 15:24:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.144 15:24:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.144 15:24:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.144 ************************************ 00:06:05.144 START TEST locking_app_on_unlocked_coremask 00:06:05.144 ************************************ 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=124767 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 124767 /var/tmp/spdk.sock 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 124767 ']' 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.144 15:24:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.144 [2024-09-27 15:24:45.618813] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:05.145 [2024-09-27 15:24:45.618873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124767 ] 00:06:05.405 [2024-09-27 15:24:45.700270] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
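locking_app_on_unlocked_coremask, started above, inverts that arrangement: the first target disables locking, so the second target on spdk2.sock is the one that claims core 0 and owns the lock file. A sketch with the pids from this run:

    spdk_tgt -m 0x1 --disable-cpumask-locks    # pid 124767: no lock taken
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # pid 124990: claims core 0
    lslocks -p 124990 | grep -q spdk_cpu_lock  # the lock belongs to the second pid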
00:06:05.405 [2024-09-27 15:24:45.700306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.405 [2024-09-27 15:24:45.739511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=124990 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 124990 /var/tmp/spdk2.sock 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 124990 ']' 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.976 15:24:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.237 [2024-09-27 15:24:46.477279] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:06.237 [2024-09-27 15:24:46.477332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124990 ] 00:06:06.237 [2024-09-27 15:24:46.551901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.237 [2024-09-27 15:24:46.608746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.809 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.809 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.809 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 124990 00:06:06.809 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.809 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124990 00:06:07.752 lslocks: write error 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 124767 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 124767 ']' 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 124767 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.752 15:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124767 00:06:07.752 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.752 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.752 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124767' 00:06:07.752 killing process with pid 124767 00:06:07.752 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 124767 00:06:07.752 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 124767 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 124990 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 124990 ']' 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 124990 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124990 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.013 15:24:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124990' 00:06:08.013 killing process with pid 124990 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 124990 00:06:08.013 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 124990 00:06:08.274 00:06:08.274 real 0m3.097s 00:06:08.274 user 0m3.390s 00:06:08.274 sys 0m1.018s 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.274 ************************************ 00:06:08.274 END TEST locking_app_on_unlocked_coremask 00:06:08.274 ************************************ 00:06:08.274 15:24:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.274 15:24:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.274 15:24:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.274 15:24:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.274 ************************************ 00:06:08.274 START TEST locking_app_on_locked_coremask 00:06:08.274 ************************************ 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125380 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125380 /var/tmp/spdk.sock 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 125380 ']' 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.274 15:24:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.535 [2024-09-27 15:24:48.803455] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:08.535 [2024-09-27 15:24:48.803513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125380 ] 00:06:08.535 [2024-09-27 15:24:48.883888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.535 [2024-09-27 15:24:48.917370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125697 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125697 /var/tmp/spdk2.sock 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 125697 /var/tmp/spdk2.sock 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 125697 /var/tmp/spdk2.sock 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 125697 ']' 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.107 15:24:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.368 [2024-09-27 15:24:49.623830] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:09.368 [2024-09-27 15:24:49.623883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125697 ] 00:06:09.368 [2024-09-27 15:24:49.697820] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125380 has claimed it. 00:06:09.368 [2024-09-27 15:24:49.697849] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (125697) - No such process 00:06:09.938 ERROR: process (pid: 125697) is no longer running 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125380 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125380 00:06:09.938 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.199 lslocks: write error 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125380 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 125380 ']' 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 125380 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125380 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125380' 00:06:10.199 killing process with pid 125380 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 125380 00:06:10.199 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 125380 00:06:10.465 00:06:10.465 real 0m1.992s 00:06:10.465 user 0m2.238s 00:06:10.465 sys 0m0.544s 00:06:10.465 15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.465 
15:24:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.465 ************************************ 00:06:10.465 END TEST locking_app_on_locked_coremask 00:06:10.465 ************************************ 00:06:10.465 15:24:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.465 15:24:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.465 15:24:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.465 15:24:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.465 ************************************ 00:06:10.465 START TEST locking_overlapped_coremask 00:06:10.465 ************************************ 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125913 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125913 /var/tmp/spdk.sock 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 125913 ']' 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.465 15:24:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.465 [2024-09-27 15:24:50.869837] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:10.465 [2024-09-27 15:24:50.869914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125913 ] 00:06:10.465 [2024-09-27 15:24:50.951101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.726 [2024-09-27 15:24:50.992993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.726 [2024-09-27 15:24:50.993277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.726 [2024-09-27 15:24:50.993278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=126078 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 126078 /var/tmp/spdk2.sock 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 126078 /var/tmp/spdk2.sock 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 126078 /var/tmp/spdk2.sock 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 126078 ']' 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.297 15:24:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 [2024-09-27 15:24:51.711809] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:11.297 [2024-09-27 15:24:51.711860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126078 ] 00:06:11.558 [2024-09-27 15:24:51.803694] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125913 has claimed it. 00:06:11.558 [2024-09-27 15:24:51.803735] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (126078) - No such process 00:06:12.129 ERROR: process (pid: 126078) is no longer running 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125913 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 125913 ']' 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 125913 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125913 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125913' 00:06:12.130 killing process with pid 125913 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 125913 00:06:12.130 15:24:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 125913 00:06:12.130 00:06:12.130 real 0m1.799s 00:06:12.130 user 0m5.143s 00:06:12.130 sys 0m0.403s 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.130 15:24:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.130 ************************************ 00:06:12.130 END TEST locking_overlapped_coremask 00:06:12.130 ************************************ 00:06:12.391 15:24:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.391 15:24:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.391 15:24:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.391 15:24:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.391 ************************************ 00:06:12.391 START TEST locking_overlapped_coremask_via_rpc 00:06:12.391 ************************************ 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=126384 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 126384 /var/tmp/spdk.sock 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 126384 ']' 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.391 15:24:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.391 [2024-09-27 15:24:52.737930] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:12.391 [2024-09-27 15:24:52.737990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126384 ] 00:06:12.391 [2024-09-27 15:24:52.818036] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
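The overlapped_coremask tests pair mask 0x7 with 0x1c deliberately: the two masks intersect on exactly one core, which is why the claim failures in this section all name core 2. The arithmetic:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4: core 2 is the only contested core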
00:06:12.391 [2024-09-27 15:24:52.818066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.391 [2024-09-27 15:24:52.852212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.391 [2024-09-27 15:24:52.852368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.391 [2024-09-27 15:24:52.852369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=126452 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 126452 /var/tmp/spdk2.sock 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 126452 ']' 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.333 15:24:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.333 [2024-09-27 15:24:53.574787] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:13.333 [2024-09-27 15:24:53.574836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126452 ] 00:06:13.333 [2024-09-27 15:24:53.665805] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.333 [2024-09-27 15:24:53.665833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.333 [2024-09-27 15:24:53.733932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.333 [2024-09-27 15:24:53.734018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.333 [2024-09-27 15:24:53.734020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.902 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.162 [2024-09-27 15:24:54.394982] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126384 has claimed it. 
00:06:14.162 request: 00:06:14.162 { 00:06:14.162 "method": "framework_enable_cpumask_locks", 00:06:14.162 "req_id": 1 00:06:14.162 } 00:06:14.162 Got JSON-RPC error response 00:06:14.162 response: 00:06:14.162 { 00:06:14.162 "code": -32603, 00:06:14.162 "message": "Failed to claim CPU core: 2" 00:06:14.162 } 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 126384 /var/tmp/spdk.sock 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 126384 ']' 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.162 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 126452 /var/tmp/spdk2.sock 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 126452 ']' 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
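The -32603 response above is the expected outcome: once pid 126384 re-enables cpumask locks via framework_enable_cpumask_locks, the second target cannot claim the shared core 2. Reproducing the failing call by hand would look roughly like this (assuming the standard SPDK rpc.py client; the socket path is the one from this run):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected JSON-RPC error: code -32603, "Failed to claim CPU core: 2"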
00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.163 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.423 00:06:14.423 real 0m2.092s 00:06:14.423 user 0m0.856s 00:06:14.423 sys 0m0.159s 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.423 15:24:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.423 ************************************ 00:06:14.423 END TEST locking_overlapped_coremask_via_rpc 00:06:14.423 ************************************ 00:06:14.423 15:24:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.423 15:24:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126384 ]] 00:06:14.423 15:24:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126384 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 126384 ']' 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 126384 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126384 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126384' 00:06:14.423 killing process with pid 126384 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 126384 00:06:14.423 15:24:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 126384 00:06:14.682 15:24:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126452 ]] 00:06:14.683 15:24:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126452 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 126452 ']' 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 126452 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126452 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126452' 00:06:14.683 killing process with pid 126452 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 126452 00:06:14.683 15:24:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 126452 00:06:14.943 15:24:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.943 15:24:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.943 15:24:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126384 ]] 00:06:14.943 15:24:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126384 00:06:14.943 15:24:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 126384 ']' 00:06:14.943 15:24:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 126384 00:06:14.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (126384) - No such process 00:06:14.943 15:24:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 126384 is not found' 00:06:14.943 Process with pid 126384 is not found 00:06:14.944 15:24:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126452 ]] 00:06:14.944 15:24:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126452 00:06:14.944 15:24:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 126452 ']' 00:06:14.944 15:24:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 126452 00:06:14.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (126452) - No such process 00:06:14.944 15:24:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 126452 is not found' 00:06:14.944 Process with pid 126452 is not found 00:06:14.944 15:24:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.944 00:06:14.944 real 0m16.452s 00:06:14.944 user 0m28.508s 00:06:14.944 sys 0m5.165s 00:06:14.944 15:24:55 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.944 15:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.944 ************************************ 00:06:14.944 END TEST cpu_locks 00:06:14.944 ************************************ 00:06:14.944 00:06:14.944 real 0m42.561s 00:06:14.944 user 1m24.235s 00:06:14.944 sys 0m8.591s 00:06:14.944 15:24:55 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.944 15:24:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.944 ************************************ 00:06:14.944 END TEST event 00:06:14.944 ************************************ 00:06:14.944 15:24:55 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.944 15:24:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.944 15:24:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.944 15:24:55 -- common/autotest_common.sh@10 -- # set +x 00:06:15.205 ************************************ 00:06:15.205 START TEST thread 00:06:15.205 ************************************ 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.205 * Looking for test storage... 00:06:15.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.205 15:24:55 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.205 15:24:55 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.205 15:24:55 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.205 15:24:55 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.205 15:24:55 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.205 15:24:55 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.205 15:24:55 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.205 15:24:55 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.205 15:24:55 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.205 15:24:55 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.205 15:24:55 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.205 15:24:55 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:15.205 15:24:55 thread -- scripts/common.sh@345 -- # : 1 00:06:15.205 15:24:55 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.205 15:24:55 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.205 15:24:55 thread -- scripts/common.sh@365 -- # decimal 1 00:06:15.205 15:24:55 thread -- scripts/common.sh@353 -- # local d=1 00:06:15.205 15:24:55 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.205 15:24:55 thread -- scripts/common.sh@355 -- # echo 1 00:06:15.205 15:24:55 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.205 15:24:55 thread -- scripts/common.sh@366 -- # decimal 2 00:06:15.205 15:24:55 thread -- scripts/common.sh@353 -- # local d=2 00:06:15.205 15:24:55 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.205 15:24:55 thread -- scripts/common.sh@355 -- # echo 2 00:06:15.205 15:24:55 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.205 15:24:55 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.205 15:24:55 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.205 15:24:55 thread -- scripts/common.sh@368 -- # return 0 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.205 --rc genhtml_branch_coverage=1 00:06:15.205 --rc genhtml_function_coverage=1 00:06:15.205 --rc genhtml_legend=1 00:06:15.205 --rc geninfo_all_blocks=1 00:06:15.205 --rc geninfo_unexecuted_blocks=1 00:06:15.205 00:06:15.205 ' 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.205 --rc genhtml_branch_coverage=1 00:06:15.205 --rc genhtml_function_coverage=1 00:06:15.205 --rc genhtml_legend=1 00:06:15.205 --rc geninfo_all_blocks=1 00:06:15.205 --rc geninfo_unexecuted_blocks=1 00:06:15.205 00:06:15.205 ' 00:06:15.205 15:24:55 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.205 --rc genhtml_branch_coverage=1 00:06:15.205 --rc genhtml_function_coverage=1 00:06:15.205 --rc genhtml_legend=1 00:06:15.205 --rc geninfo_all_blocks=1 00:06:15.205 --rc geninfo_unexecuted_blocks=1 00:06:15.205 00:06:15.205 ' 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.205 --rc genhtml_branch_coverage=1 00:06:15.205 --rc genhtml_function_coverage=1 00:06:15.205 --rc genhtml_legend=1 00:06:15.205 --rc geninfo_all_blocks=1 00:06:15.205 --rc geninfo_unexecuted_blocks=1 00:06:15.205 00:06:15.205 ' 00:06:15.205 15:24:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.205 15:24:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.466 ************************************ 00:06:15.466 START TEST thread_poller_perf 00:06:15.466 ************************************ 00:06:15.466 15:24:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.466 [2024-09-27 15:24:55.718802] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:15.466 [2024-09-27 15:24:55.718891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126975 ] 00:06:15.466 [2024-09-27 15:24:55.804096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.466 [2024-09-27 15:24:55.843804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.466 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:16.406 ======================================
00:06:16.406 busy:2406947510 (cyc)
00:06:16.406 total_run_count: 418000
00:06:16.406 tsc_hz: 2400000000 (cyc)
00:06:16.406 ======================================
00:06:16.406 poller_cost: 5758 (cyc), 2399 (nsec)
00:06:16.406
00:06:16.406 real 0m1.187s
00:06:16.406 user 0m1.086s
00:06:16.406 sys 0m0.097s
00:06:16.406 15:24:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:16.406 15:24:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:16.406 ************************************
00:06:16.406 END TEST thread_poller_perf
00:06:16.406 ************************************
00:06:16.667 15:24:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:16.667 15:24:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:06:16.667 15:24:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:16.667 15:24:56 thread -- common/autotest_common.sh@10 -- # set +x
00:06:16.667 ************************************
00:06:16.667 START TEST thread_poller_perf
00:06:16.667 ************************************
00:06:16.667 15:24:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:16.667 [2024-09-27 15:24:56.981706] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:06:16.667 [2024-09-27 15:24:56.981789] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127250 ]
00:06:16.667 [2024-09-27 15:24:57.063795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.667 [2024-09-27 15:24:57.098776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.667 Running 1000 pollers for 1 seconds with 0 microseconds period.
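Both poller_perf summaries reduce to simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by the 2.4 GHz TSC. A quick cross-check with the numbers from the 1 us period run above (a reader's sanity check, not part of poller_perf itself):

# Reproduce poller_cost from the counters in the first summary above:
busy=2406947510; total_run_count=418000; tsc_hz=2400000000
echo "poller_cost: $(( busy / total_run_count )) (cyc)"                          # -> 5758
echo "poller_cost: $(( busy / total_run_count * 1000000000 / tsc_hz )) (nsec)"   # -> 2399

The 0 us period run reported next drops to 432 cycles (180 ns) per call, presumably because those pollers run back to back instead of being re-armed through the timer path.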
00:06:18.054 ======================================
00:06:18.054 busy:2401738378 (cyc)
00:06:18.054 total_run_count: 5553000
00:06:18.054 tsc_hz: 2400000000 (cyc)
00:06:18.054 ======================================
00:06:18.054 poller_cost: 432 (cyc), 180 (nsec)
00:06:18.054
00:06:18.054 real 0m1.175s
00:06:18.054 user 0m1.082s
00:06:18.054 sys 0m0.088s
00:06:18.054 15:24:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:18.054 15:24:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:18.054 ************************************
00:06:18.054 END TEST thread_poller_perf
00:06:18.054 ************************************
00:06:18.054 15:24:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:18.054
00:06:18.054 real 0m2.719s
00:06:18.054 user 0m2.340s
00:06:18.054 sys 0m0.391s
00:06:18.054 15:24:58 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:18.054 15:24:58 thread -- common/autotest_common.sh@10 -- # set +x
00:06:18.054 ************************************
00:06:18.054 END TEST thread
00:06:18.054 ************************************
00:06:18.054 15:24:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:06:18.054 15:24:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:18.054 15:24:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:18.054 15:24:58 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:18.054 15:24:58 -- common/autotest_common.sh@10 -- # set +x
00:06:18.054 ************************************
00:06:18.054 START TEST app_cmdline
00:06:18.054 ************************************
00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:18.054 * Looking for test storage...
00:06:18.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version
00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@345 -- # : 1
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.054 15:24:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.054 15:24:58 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.055 --rc genhtml_branch_coverage=1 00:06:18.055 --rc genhtml_function_coverage=1 00:06:18.055 --rc genhtml_legend=1 00:06:18.055 --rc geninfo_all_blocks=1 00:06:18.055 --rc geninfo_unexecuted_blocks=1 00:06:18.055 00:06:18.055 ' 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.055 --rc genhtml_branch_coverage=1 00:06:18.055 --rc genhtml_function_coverage=1 00:06:18.055 --rc genhtml_legend=1 00:06:18.055 --rc geninfo_all_blocks=1 00:06:18.055 --rc geninfo_unexecuted_blocks=1 00:06:18.055 00:06:18.055 ' 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.055 --rc genhtml_branch_coverage=1 00:06:18.055 --rc genhtml_function_coverage=1 00:06:18.055 --rc genhtml_legend=1 00:06:18.055 --rc geninfo_all_blocks=1 00:06:18.055 --rc geninfo_unexecuted_blocks=1 00:06:18.055 00:06:18.055 ' 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.055 --rc genhtml_branch_coverage=1 00:06:18.055 --rc genhtml_function_coverage=1 00:06:18.055 --rc genhtml_legend=1 00:06:18.055 --rc geninfo_all_blocks=1 00:06:18.055 --rc geninfo_unexecuted_blocks=1 00:06:18.055 00:06:18.055 ' 00:06:18.055 15:24:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.055 15:24:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127653 00:06:18.055 15:24:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127653 00:06:18.055 15:24:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 127653 ']' 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.055 15:24:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.055 [2024-09-27 15:24:58.504019] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:18.055 [2024-09-27 15:24:58.504071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127653 ] 00:06:18.315 [2024-09-27 15:24:58.579533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.315 [2024-09-27 15:24:58.609264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.887 15:24:59 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.887 15:24:59 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:18.887 15:24:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:19.148 { 00:06:19.148 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:19.148 "fields": { 00:06:19.148 "major": 25, 00:06:19.148 "minor": 1, 00:06:19.148 "patch": 0, 00:06:19.148 "suffix": "-pre", 00:06:19.148 "commit": "09cc66129" 00:06:19.148 } 00:06:19.148 } 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:19.148 15:24:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.148 15:24:59 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:19.149 15:24:59 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.410 request: 00:06:19.410 { 00:06:19.410 "method": "env_dpdk_get_mem_stats", 00:06:19.410 "req_id": 1 00:06:19.410 } 00:06:19.410 Got JSON-RPC error response 00:06:19.410 response: 00:06:19.410 { 00:06:19.410 "code": -32601, 00:06:19.410 "message": "Method not found" 00:06:19.410 } 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.410 15:24:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127653 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 127653 ']' 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 127653 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127653 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127653' 00:06:19.410 killing process with pid 127653 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@969 -- # kill 127653 00:06:19.410 15:24:59 app_cmdline -- common/autotest_common.sh@974 -- # wait 127653 00:06:19.672 00:06:19.672 real 0m1.710s 00:06:19.672 user 0m2.067s 00:06:19.672 sys 0m0.440s 00:06:19.672 15:24:59 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.672 15:24:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.672 ************************************ 00:06:19.672 END TEST app_cmdline 00:06:19.672 ************************************ 00:06:19.672 15:25:00 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.672 15:25:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.672 15:25:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.672 15:25:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.672 ************************************ 00:06:19.672 START TEST version 00:06:19.672 ************************************ 00:06:19.672 15:25:00 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.672 * Looking for test storage... 
00:06:19.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:19.672 15:25:00 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.672 15:25:00 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.672 15:25:00 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.934 15:25:00 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.934 15:25:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.934 15:25:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.934 15:25:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.934 15:25:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.934 15:25:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.934 15:25:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.934 15:25:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.934 15:25:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.934 15:25:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.934 15:25:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.934 15:25:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.934 15:25:00 version -- scripts/common.sh@344 -- # case "$op" in 00:06:19.934 15:25:00 version -- scripts/common.sh@345 -- # : 1 00:06:19.934 15:25:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.934 15:25:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.934 15:25:00 version -- scripts/common.sh@365 -- # decimal 1 00:06:19.934 15:25:00 version -- scripts/common.sh@353 -- # local d=1 00:06:19.934 15:25:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.934 15:25:00 version -- scripts/common.sh@355 -- # echo 1 00:06:19.934 15:25:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.934 15:25:00 version -- scripts/common.sh@366 -- # decimal 2 00:06:19.934 15:25:00 version -- scripts/common.sh@353 -- # local d=2 00:06:19.934 15:25:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.934 15:25:00 version -- scripts/common.sh@355 -- # echo 2 00:06:19.934 15:25:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.934 15:25:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.934 15:25:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.934 15:25:00 version -- scripts/common.sh@368 -- # return 0 00:06:19.934 15:25:00 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.934 15:25:00 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.934 --rc genhtml_branch_coverage=1 00:06:19.934 --rc genhtml_function_coverage=1 00:06:19.934 --rc genhtml_legend=1 00:06:19.934 --rc geninfo_all_blocks=1 00:06:19.934 --rc geninfo_unexecuted_blocks=1 00:06:19.934 00:06:19.934 ' 00:06:19.934 15:25:00 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.934 --rc genhtml_branch_coverage=1 00:06:19.934 --rc genhtml_function_coverage=1 00:06:19.934 --rc genhtml_legend=1 00:06:19.934 --rc geninfo_all_blocks=1 00:06:19.934 --rc geninfo_unexecuted_blocks=1 00:06:19.934 00:06:19.934 ' 00:06:19.934 15:25:00 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.934 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.934 --rc genhtml_branch_coverage=1 00:06:19.934 --rc genhtml_function_coverage=1 00:06:19.934 --rc genhtml_legend=1 00:06:19.935 --rc geninfo_all_blocks=1 00:06:19.935 --rc geninfo_unexecuted_blocks=1 00:06:19.935 00:06:19.935 ' 00:06:19.935 15:25:00 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.935 --rc genhtml_branch_coverage=1 00:06:19.935 --rc genhtml_function_coverage=1 00:06:19.935 --rc genhtml_legend=1 00:06:19.935 --rc geninfo_all_blocks=1 00:06:19.935 --rc geninfo_unexecuted_blocks=1 00:06:19.935 00:06:19.935 ' 00:06:19.935 15:25:00 version -- app/version.sh@17 -- # get_header_version major 00:06:19.935 15:25:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # cut -f2 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.935 15:25:00 version -- app/version.sh@17 -- # major=25 00:06:19.935 15:25:00 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.935 15:25:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # cut -f2 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.935 15:25:00 version -- app/version.sh@18 -- # minor=1 00:06:19.935 15:25:00 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.935 15:25:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # cut -f2 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.935 15:25:00 version -- app/version.sh@19 -- # patch=0 00:06:19.935 15:25:00 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.935 15:25:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # cut -f2 00:06:19.935 15:25:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.935 15:25:00 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.935 15:25:00 version -- app/version.sh@22 -- # version=25.1 00:06:19.935 15:25:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.935 15:25:00 version -- app/version.sh@28 -- # version=25.1rc0 00:06:19.935 15:25:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.935 15:25:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.935 15:25:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:19.935 15:25:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:19.935 00:06:19.935 real 0m0.282s 00:06:19.935 user 0m0.163s 00:06:19.935 sys 0m0.166s 00:06:19.935 15:25:00 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.935 
15:25:00 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.935 ************************************ 00:06:19.935 END TEST version 00:06:19.935 ************************************ 00:06:19.935 15:25:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:19.935 15:25:00 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.935 15:25:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:19.935 15:25:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.935 15:25:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.935 15:25:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:19.935 15:25:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.935 15:25:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.935 15:25:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:19.935 15:25:00 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:19.935 15:25:00 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.935 15:25:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:19.935 15:25:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.935 15:25:00 -- common/autotest_common.sh@10 -- # set +x 00:06:20.197 ************************************ 00:06:20.197 START TEST nvmf_tcp 00:06:20.197 ************************************ 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:20.197 * Looking for test storage... 
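The version test that just ended derives everything from include/spdk/version.h: each component is grepped out of its #define, glued into major.minor (plus .patch when nonzero), and the -pre suffix is reported as an rc0 tag before the result is compared with python's spdk.__version__ (both sides read 25.1rc0 here). Condensed from the get_header_version calls in the trace (the suffix-to-rc0 step is simplified to this run's case):

# Condensed sketch of the get_header_version pipeline traced above:
hdr=include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' $hdr | cut -f2 | tr -d '"')   # 25
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' $hdr | cut -f2 | tr -d '"')   # 1
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' $hdr | cut -f2 | tr -d '"')   # 0
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
version=${version}rc0    # suffix -pre is reported as rc0, giving 25.1rc0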
00:06:20.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.197 15:25:00 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.197 --rc genhtml_branch_coverage=1 00:06:20.197 --rc genhtml_function_coverage=1 00:06:20.197 --rc genhtml_legend=1 00:06:20.197 --rc geninfo_all_blocks=1 00:06:20.197 --rc geninfo_unexecuted_blocks=1 00:06:20.197 00:06:20.197 ' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.197 --rc genhtml_branch_coverage=1 00:06:20.197 --rc genhtml_function_coverage=1 00:06:20.197 --rc genhtml_legend=1 00:06:20.197 --rc geninfo_all_blocks=1 00:06:20.197 --rc geninfo_unexecuted_blocks=1 00:06:20.197 00:06:20.197 ' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:20.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.197 --rc genhtml_branch_coverage=1 00:06:20.197 --rc genhtml_function_coverage=1 00:06:20.197 --rc genhtml_legend=1 00:06:20.197 --rc geninfo_all_blocks=1 00:06:20.197 --rc geninfo_unexecuted_blocks=1 00:06:20.197 00:06:20.197 ' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.197 --rc genhtml_branch_coverage=1 00:06:20.197 --rc genhtml_function_coverage=1 00:06:20.197 --rc genhtml_legend=1 00:06:20.197 --rc geninfo_all_blocks=1 00:06:20.197 --rc geninfo_unexecuted_blocks=1 00:06:20.197 00:06:20.197 ' 00:06:20.197 15:25:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:20.197 15:25:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:20.197 15:25:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.197 15:25:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.460 ************************************ 00:06:20.460 START TEST nvmf_target_core 00:06:20.460 ************************************ 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:20.460 * Looking for test storage... 00:06:20.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.460 --rc genhtml_branch_coverage=1 00:06:20.460 --rc genhtml_function_coverage=1 00:06:20.460 --rc genhtml_legend=1 00:06:20.460 --rc geninfo_all_blocks=1 00:06:20.460 --rc geninfo_unexecuted_blocks=1 00:06:20.460 00:06:20.460 ' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.460 --rc genhtml_branch_coverage=1 00:06:20.460 --rc genhtml_function_coverage=1 00:06:20.460 --rc genhtml_legend=1 00:06:20.460 --rc geninfo_all_blocks=1 00:06:20.460 --rc geninfo_unexecuted_blocks=1 00:06:20.460 00:06:20.460 ' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.460 --rc genhtml_branch_coverage=1 00:06:20.460 --rc genhtml_function_coverage=1 00:06:20.460 --rc genhtml_legend=1 00:06:20.460 --rc geninfo_all_blocks=1 00:06:20.460 --rc geninfo_unexecuted_blocks=1 00:06:20.460 00:06:20.460 ' 00:06:20.460 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.460 --rc genhtml_branch_coverage=1 00:06:20.461 --rc genhtml_function_coverage=1 00:06:20.461 --rc genhtml_legend=1 00:06:20.461 --rc geninfo_all_blocks=1 00:06:20.461 --rc geninfo_unexecuted_blocks=1 00:06:20.461 00:06:20.461 ' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.461 15:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.723 
************************************ 00:06:20.723 START TEST nvmf_abort 00:06:20.723 ************************************ 00:06:20.723 15:25:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:20.723 * Looking for test storage... 00:06:20.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.723 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.724 --rc genhtml_branch_coverage=1 00:06:20.724 --rc genhtml_function_coverage=1 00:06:20.724 --rc genhtml_legend=1 00:06:20.724 --rc geninfo_all_blocks=1 00:06:20.724 --rc geninfo_unexecuted_blocks=1 00:06:20.724 00:06:20.724 ' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.724 --rc genhtml_branch_coverage=1 00:06:20.724 --rc genhtml_function_coverage=1 00:06:20.724 --rc genhtml_legend=1 00:06:20.724 --rc geninfo_all_blocks=1 00:06:20.724 --rc geninfo_unexecuted_blocks=1 00:06:20.724 00:06:20.724 ' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.724 --rc genhtml_branch_coverage=1 00:06:20.724 --rc genhtml_function_coverage=1 00:06:20.724 --rc genhtml_legend=1 00:06:20.724 --rc geninfo_all_blocks=1 00:06:20.724 --rc geninfo_unexecuted_blocks=1 00:06:20.724 00:06:20.724 ' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.724 --rc genhtml_branch_coverage=1 00:06:20.724 --rc genhtml_function_coverage=1 00:06:20.724 --rc genhtml_legend=1 00:06:20.724 --rc geninfo_all_blocks=1 00:06:20.724 --rc geninfo_unexecuted_blocks=1 00:06:20.724 00:06:20.724 ' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.724 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
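Note the shell error surfaced in the trace above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test(1)'s -eq demands integer operands, so the empty expansion draws "[: : integer expression expected" and the branch is simply skipped. A minimal sketch of the failure and two defensive rewrites; "flag" is a hypothetical stand-in for whatever variable expanded empty here:

    #!/usr/bin/env bash
    flag=''                        # unset/empty, as in the trace
    if [ "$flag" -eq 1 ]; then     # test(1): -eq needs integers, so this
        echo interactive           # prints "[: : integer expression expected"
    fi                             # and returns 2; the body never runs

    if [ "${flag:-0}" -eq 1 ]; then echo interactive; fi   # default empty to 0
    if [[ "$flag" == 1 ]]; then echo interactive; fi       # string compare never errors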
00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.986 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.987 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.987 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:20.987 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:20.987 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.987 15:25:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.132 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.132 15:25:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:29.133 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:29.133 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:29.133 15:25:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:29.133 Found net devices under 0000:31:00.0: cvl_0_0 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:29.133 Found net devices under 0000:31:00.1: cvl_0_1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.133 15:25:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:06:29.133 00:06:29.133 --- 10.0.0.2 ping statistics --- 00:06:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.133 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:06:29.133 00:06:29.133 --- 10.0.0.1 ping statistics --- 00:06:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.133 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=132209 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 132209 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 132209 ']' 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.133 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.134 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.134 [2024-09-27 15:25:08.982576] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
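With the two ice ports discovered, nvmf_tcp_init builds the topology traced above: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule tagged SPDK_NVMF opens port 4420, and the two pings prove the path in both directions. A condensed sketch of the same bring-up, assuming this run's interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                      # tag so teardown can find it
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator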
00:06:29.134 [2024-09-27 15:25:08.982646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.134 [2024-09-27 15:25:09.074225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.134 [2024-09-27 15:25:09.125439] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.134 [2024-09-27 15:25:09.125497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.134 [2024-09-27 15:25:09.125509] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.134 [2024-09-27 15:25:09.125519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.134 [2024-09-27 15:25:09.125530] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.134 [2024-09-27 15:25:09.125707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.134 [2024-09-27 15:25:09.125864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.134 [2024-09-27 15:25:09.125864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.395 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.395 [2024-09-27 15:25:09.880398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.656 Malloc0 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.656 Delay0 
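abort.sh then provisions the data path through rpc_cmd, a thin wrapper that forwards to scripts/rpc.py against /var/tmp/spdk.sock inside the namespace: the TCP transport with the options traced above, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev stacked on top whose -r/-t/-w/-n values (average and 99th-percentile read/write latencies, in microseconds per rpc.py's bdev_delay_create) keep I/O queued long enough for aborts to catch it. The equivalent direct calls, as a sketch:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB, 4 KiB blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000          # ~1 s delays on every path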
00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.656 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.657 [2024-09-27 15:25:09.972124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.657 15:25:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:29.657 [2024-09-27 15:25:10.104330] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:32.202 Initializing NVMe Controllers 00:06:32.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:32.202 controller IO queue size 128 less than required 00:06:32.202 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:32.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:32.202 Initialization complete. Launching workers. 
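The export and the workload follow: subsystem nqn.2016-06.io.spdk:cnode0 is created with allow-any-host (-a) and serial SPDK0, Delay0 is attached as its namespace, data and discovery listeners go on 10.0.0.2:4420, and the abort example hammers it for one second at queue depth 128 from core 0. The discovery-referral warning is benign probe noise, and the completion counters directly below show nearly every submitted abort succeeding. The same sequence as a sketch:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'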
00:06:32.202 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30437 00:06:32.202 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30498, failed to submit 62 00:06:32.202 success 30441, unsuccessful 57, failed 0 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:32.202 rmmod nvme_tcp 00:06:32.202 rmmod nvme_fabrics 00:06:32.202 rmmod nvme_keyring 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 132209 ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 132209 ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132209' 00:06:32.202 killing process with pid 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 132209 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.202 15:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.749 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.749 00:06:34.749 real 0m13.657s 00:06:34.749 user 0m14.511s 00:06:34.749 sys 0m6.500s 00:06:34.749 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.749 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:34.749 ************************************ 00:06:34.749 END TEST nvmf_abort 00:06:34.750 ************************************ 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.750 ************************************ 00:06:34.750 START TEST nvmf_ns_hotplug_stress 00:06:34.750 ************************************ 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:34.750 * Looking for test storage... 
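Before the hotplug test gets under way, note the teardown pattern that just closed nvmf_abort above: delete the subsystem, kill the recorded nvmfpid, unload the initiator modules, and let iptr strip only the firewall rules the harness added, which works because every such rule carries an SPDK_NVMF comment. A sketch of that unwind; the netns removal is an assumption about what _remove_spdk_ns does, since its body is not shown in this trace:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    kill "$nvmfpid"                                         # 132209 in this run
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics  # unload initiator stack
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                         # assumed _remove_spdk_ns step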
00:06:34.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.750 --rc genhtml_branch_coverage=1 00:06:34.750 --rc genhtml_function_coverage=1 00:06:34.750 --rc genhtml_legend=1 00:06:34.750 --rc geninfo_all_blocks=1 00:06:34.750 --rc geninfo_unexecuted_blocks=1 00:06:34.750 00:06:34.750 ' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.750 --rc genhtml_branch_coverage=1 00:06:34.750 --rc genhtml_function_coverage=1 00:06:34.750 --rc genhtml_legend=1 00:06:34.750 --rc geninfo_all_blocks=1 00:06:34.750 --rc geninfo_unexecuted_blocks=1 00:06:34.750 00:06:34.750 ' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.750 --rc genhtml_branch_coverage=1 00:06:34.750 --rc genhtml_function_coverage=1 00:06:34.750 --rc genhtml_legend=1 00:06:34.750 --rc geninfo_all_blocks=1 00:06:34.750 --rc geninfo_unexecuted_blocks=1 00:06:34.750 00:06:34.750 ' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.750 --rc genhtml_branch_coverage=1 00:06:34.750 --rc genhtml_function_coverage=1 00:06:34.750 --rc genhtml_legend=1 00:06:34.750 --rc geninfo_all_blocks=1 00:06:34.750 --rc geninfo_unexecuted_blocks=1 00:06:34.750 00:06:34.750 ' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.750 15:25:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:42.895 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:42.896 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.896 15:25:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:42.896 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:42.896 Found net devices under 0000:31:00.0: cvl_0_0 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:42.896 Found net devices under 0000:31:00.1: cvl_0_1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.896 15:25:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:06:42.896 00:06:42.896 --- 10.0.0.2 ping statistics --- 00:06:42.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.896 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:06:42.896 00:06:42.896 --- 10.0.0.1 ping statistics --- 00:06:42.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.896 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:42.896 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=137316 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 137316 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 137316 ']' 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.897 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 [2024-09-27 15:25:22.737351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:42.897 [2024-09-27 15:25:22.737420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.897 [2024-09-27 15:25:22.828223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.897 [2024-09-27 15:25:22.874770] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.897 [2024-09-27 15:25:22.874829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.897 [2024-09-27 15:25:22.874841] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.897 [2024-09-27 15:25:22.874851] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.897 [2024-09-27 15:25:22.874859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
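nvmfappstart then launches the target application inside that namespace and blocks until its RPC socket answers; note that the UNIX-domain socket at /var/tmp/spdk.sock is not namespaced, which is why every rpc.py call in this log runs without an ip netns exec prefix. Roughly, as a stand-in for the waitforlisten helper (the real one in autotest_common.sh retries more carefully):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE: run reactors on cores 1-3 (matching the three reactor notices below)
    # -e 0xFFFF: enable all tracepoint groups, hence the spdk_trace notices
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1   # target died during startup
        sleep 0.1
    done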
00:06:42.897 [2024-09-27 15:25:22.875066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.897 [2024-09-27 15:25:22.875219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.897 [2024-09-27 15:25:22.875220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:43.159 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:43.422 [2024-09-27 15:25:23.778316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.422 15:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:43.683 15:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.945 [2024-09-27 15:25:24.190998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.945 15:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.945 15:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:44.206 Malloc0 00:06:44.206 15:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:44.467 Delay0 00:06:44.467 15:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.729 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:44.729 NULL1 00:06:44.729 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:44.990 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=137829 00:06:44.990 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:44.990 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:44.990 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.251 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.511 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:45.511 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:45.511 true 00:06:45.511 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:45.511 15:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.772 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.033 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:46.033 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:46.033 true 00:06:46.033 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:46.033 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.294 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.555 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:46.555 15:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:46.555 true 00:06:46.555 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:46.555 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.817 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.078 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:47.078 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:47.339 true 00:06:47.339 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:47.339 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.339 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.600 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:47.600 15:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:47.860 true 00:06:47.860 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:47.860 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.860 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.121 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:48.121 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:48.382 true 00:06:48.382 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:48.382 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.382 15:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.644 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:48.644 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:48.905 true 00:06:48.905 15:25:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:48.905 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.166 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.166 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:49.166 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:49.427 true 00:06:49.427 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:49.427 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.688 15:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.688 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:49.688 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:49.949 true 00:06:49.949 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:49.949 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.210 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.470 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:50.470 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:50.470 true 00:06:50.470 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:50.470 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.730 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.990 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:50.991 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:50.991 true 00:06:50.991 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:50.991 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.251 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.511 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:51.511 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:51.511 true 00:06:51.511 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:51.511 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.772 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.033 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:52.033 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:52.033 true 00:06:52.295 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:52.295 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.295 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.556 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:52.556 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:52.817 true 00:06:52.817 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:52.817 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.817 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.078 15:25:33 
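From here on the test is a single pattern repeated: spdk_nvme_perf hammers the subsystem from the initiator side while the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one MiB, looping for as long as the perf process stays alive. Reconstructed from the trace (the shipped ns_hotplug_stress.sh may differ in detail):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # loop while perf runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                       # grow NSID 2 by 1 MiB
    done

The -Q 1000 on the perf command line is presumably what lets the run survive the I/O errors each hot-removal provokes instead of aborting on the first one.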
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:53.078 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:53.339 true 00:06:53.339 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:53.339 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.339 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.600 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:53.600 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:53.860 true 00:06:53.860 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:53.860 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.120 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.120 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:54.120 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:54.381 true 00:06:54.381 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:54.381 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.654 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.654 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:54.654 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:54.915 true 00:06:54.915 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:54.915 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.179 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.179 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:55.179 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:55.442 true 00:06:55.442 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:55.442 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.706 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.967 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:55.967 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:55.967 true 00:06:55.967 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:55.967 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.229 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.489 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:56.489 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:56.489 true 00:06:56.751 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:56.751 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.751 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.013 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:57.013 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:57.274 true 00:06:57.274 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:57.274 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.274 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.535 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:57.535 15:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:57.796 true 00:06:57.796 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:57.796 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.796 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.058 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:58.058 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:58.319 true 00:06:58.319 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:58.319 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.582 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.582 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:58.582 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:58.843 true 00:06:58.843 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:58.843 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.103 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.103 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:59.103 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:59.364 true 00:06:59.364 15:25:39 
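Each successful bdev_null_resize prints true. The size argument is in MiB: NULL1 was created as bdev_null_create NULL1 1000 512, i.e. 1000 MiB in 512-byte blocks, so after the resize to 1026 just above it should report 1026 * 2048 = 2,101,248 blocks. A quick way to confirm, reading the fields of the bdev_get_bdevs JSON:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs -b NULL1 | python3 -c '
    import json, sys
    b = json.load(sys.stdin)[0]                          # single-element list for -b NAME
    print(b["name"], b["num_blocks"], b["block_size"])   # NULL1 2101248 512 at size 1026
    '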
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:59.364 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.624 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.624 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:59.624 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:59.884 true 00:06:59.884 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:06:59.884 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.146 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.146 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:00.146 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:00.407 true 00:07:00.407 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:00.407 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.667 15:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.928 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:00.928 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:00.928 true 00:07:00.928 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:00.928 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.189 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.450 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:01.450 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:01.450 true 00:07:01.450 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:01.450 15:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.711 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.971 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:01.971 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:01.971 true 00:07:02.232 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:02.232 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.232 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.492 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:02.493 15:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:02.753 true 00:07:02.753 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:02.753 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.753 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.014 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:03.014 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:03.276 true 00:07:03.276 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:03.276 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.276 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.537 15:25:43 
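Worth recalling why the hot-plugged namespace is Delay0 rather than a plain malloc bdev: the delay bdev created earlier completes every I/O roughly one second late (-r/-t are average and p99 read latency, -w/-n average and p99 write latency, all in microseconds), so a namespace removal almost always lands while commands are still outstanding, which is presumably exactly the race this stress test wants to provoke:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB RAM-backed bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # hold reads and writes ~1 s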
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:03.537 15:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:03.799 true 00:07:03.799 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:03.799 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.061 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.061 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:04.061 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:04.323 true 00:07:04.323 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:04.323 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.584 15:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.845 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:04.845 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:04.845 true 00:07:04.845 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:04.845 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.106 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.368 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:05.368 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:05.368 true 00:07:05.368 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:05.368 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.629 15:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.890 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:05.890 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:05.890 true 00:07:05.890 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:05.890 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.151 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.413 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:06.413 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:06.413 true 00:07:06.675 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:06.675 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.675 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.938 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:06.938 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:07.199 true 00:07:07.199 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:07.199 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.199 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.460 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:07.460 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:07.721 true 00:07:07.721 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:07.721 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.982 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.982 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:07.982 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:08.243 true 00:07:08.243 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:08.243 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.504 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.504 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:08.504 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:08.764 true 00:07:08.764 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:08.765 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.025 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.025 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:09.025 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:09.285 true 00:07:09.285 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:09.285 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.545 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.545 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:09.545 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:09.804 true 00:07:09.804 15:25:50 
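Note that nvmf_subsystem_add_ns is always called here without an explicit namespace ID, so SPDK assigns the lowest free one and Delay0 returns as NSID 1 on every pass while NULL1 keeps NSID 2; that is why the perf job, which the end-of-run summary later shows attached to NSID 2, keeps running across all of these swaps. Pinning the slot explicitly would use the same RPC's nsid option, something like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Delay0   # request NSID 1 outright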
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:09.804 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.064 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.064 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:10.064 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:10.324 true 00:07:10.324 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:10.324 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.584 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.584 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:10.584 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:10.844 true 00:07:10.844 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:10.844 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.104 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.104 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:11.104 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:11.364 true 00:07:11.364 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:11.364 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.625 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.625 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:11.625 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:11.886 true 00:07:11.886 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:11.886 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.149 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.149 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:12.149 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:12.411 true 00:07:12.411 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:12.411 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.672 15:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.933 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:12.933 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:12.933 true 00:07:12.933 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:12.933 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.194 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.456 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:13.456 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:13.456 true 00:07:13.456 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829 00:07:13.456 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.717 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.977 15:25:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:07:13.977 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:07:13.977 true
00:07:13.977 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829
00:07:13.977 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.238 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:14.499 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:07:14.499 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:07:14.499 true
00:07:14.761 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829
00:07:14.761 15:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.761 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.022 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:07:15.022 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:07:15.284 true
00:07:15.284 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829
00:07:15.284 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.284 Initializing NVMe Controllers
00:07:15.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:15.284 Controller IO queue size 128, less than required.
00:07:15.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:15.284 Initialization complete. Launching workers.
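
The repeating @44-@50 records above are the hot-plug loop of ns_hotplug_stress.sh: each pass checks with kill -0 that the backgrounded I/O generator (PID 137829) is still alive, swaps namespace 1 out of nqn.2016-06.io.spdk:cnode1 and back in, and grows the NULL1 null bdev by one megabyte per pass (null_size 1046 through 1056 in this stretch). A minimal bash reconstruction from the xtrace records, not the verbatim script:

    # Sketch of ns_hotplug_stress.sh lines 44-50, reconstructed from the
    # trace above. "rpc.py" stands for the full scripts/rpc.py path used
    # in the log, and PERF_PID is the backgrounded I/O generator
    # (137829 in this run) -- both names are stand-ins, not verbatim.
    null_size=1045
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # line 44: generator still alive?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # line 45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # line 46
        (( ++null_size ))                                              # line 49: 1046, 1047, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                     # line 50: new size in MB
    done

The summary below is the generator's own output; once the generator exits, the line-44 probe fails with "No such process" (visible just after the summary) and the script falls through to the cleanup at lines 53-55.
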
00:07:15.284 ========================================================
00:07:15.284                                                                             Latency(us)
00:07:15.284 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:15.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31111.37      15.19    4114.22    1141.29    7962.28
00:07:15.284 ========================================================
00:07:15.284 Total                                                                    :   31111.37      15.19    4114.22    1141.29    7962.28
00:07:15.284
00:07:15.284 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.547 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:15.547 15:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:15.809 true
00:07:15.809 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137829
00:07:15.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (137829) - No such process
00:07:15.809 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 137829
00:07:15.809 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.809 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:16.071 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:16.071 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:16.071 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:16.071 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.071 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:16.332 null0
00:07:16.332 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.332 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.332 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:16.332 null1
00:07:16.593 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.593 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.593 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:16.593 null2
00:07:16.593 15:25:56
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.593 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.593 15:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:16.855 null3 00:07:16.855 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.855 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.855 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:16.855 null4 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:17.116 null5 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.116 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:17.378 null6 00:07:17.378 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.378 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.378 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:17.640 null7 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
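
Lines 58-60 of the script, traced just above, provision the parallel phase: eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size (the three positional arguments to bdev_null_create). Under the same reconstruction caveat:

    # Sketch of ns_hotplug_stress.sh lines 58-60, reconstructed from the
    # trace; rpc.py again stands for the full scripts/rpc.py path.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size_mb> <block_size>
        rpc.py bdev_null_create "null$i" 100 4096
    done

The @62-@64 records that follow launch one add_remove worker per bdev; see the sketch after them.
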
00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:17.640 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
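
The launcher at lines 62-64 backgrounds one add_remove worker per (nsid, bdev) pair and collects the child PIDs, and line 66 joins them all; that is the wait 144565 144566 144568 144570 144572 144574 144575 144577 record just below. The worker body at lines 14-18 adds its namespace with an explicit -n nsid and removes it again, ten times. Reconstructed from the trace, not copied from the script:

    # Sketch of ns_hotplug_stress.sh lines 14-18 and 62-66.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 backed by null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"

All eight subshells write xtrace to the same stream, so from this point their @16/@17/@18 records interleave; the shuffled nsid/bdev pairs in the rest of the trace are concurrency, not errors.
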
00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 144565 144566 144568 144570 144572 144574 144575 144577 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.641 15:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.641 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.641 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.641 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:17.641 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.904 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:18.166 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.427 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.427 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.427 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.427 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.427 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.428 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.690 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.690 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.690 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.951 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.952 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.952 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.213 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.213 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.476 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.476 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.738 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.738 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.738 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.738 15:26:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.738 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.999 15:26:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.999 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.000 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.262 15:26:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.262 15:26:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.262 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.525 15:26:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.525 15:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.786 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.787 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.049 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
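The interleaved add/remove traffic above (and the few remaining iterations just below) comes from target/ns_hotplug_stress.sh. Reading the @16/@17/@18 markers back out of the xtrace, each namespace appears to get its own background worker that hot-adds and hot-removes it ten times in a row. A minimal sketch follows: the add_remove function name and the worker fan-out are assumptions inferred from the out-of-order trace lines, while the loop body and RPC arguments are taken verbatim from the log.

    # Sketch of the hotplug stress pattern (assumptions noted above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; ++i )); do                             # @16
            "$RPC" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # @17
            "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"           # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # one worker per namespace; eight
    done                                     # concurrent loops explain the
    wait                                     # shuffled ordering in the trace

Running eight of these loops concurrently is the point of the test: the target has to survive namespaces appearing and disappearing on nqn.2016-06.io.spdk:cnode1 while the other namespaces are being churned at the same time.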
00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.310 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.571 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:21.571 rmmod nvme_tcp 00:07:21.571 rmmod nvme_fabrics 00:07:21.571 rmmod nvme_keyring 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 137316 ']' 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 137316 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 137316 ']' 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 137316 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137316 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 137316' 00:07:21.572 killing process with pid 137316 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 137316 00:07:21.572 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 137316 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.832 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.747 00:07:23.747 real 0m49.454s 00:07:23.747 user 3m20.706s 00:07:23.747 sys 0m17.432s 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:23.747 ************************************ 00:07:23.747 END TEST nvmf_ns_hotplug_stress 00:07:23.747 ************************************ 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.747 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.010 ************************************ 00:07:24.010 START TEST nvmf_delete_subsystem 00:07:24.010 ************************************ 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.010 * Looking for test storage... 
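The teardown above funnels through the killprocess helper in common/autotest_common.sh; the @950-@974 markers show its guard rails firing in order. A sketch of the visible logic follows; the sudo branch never triggers in this run (the process name resolves to reactor_1), so its body below is an assumption:

    # killprocess() as reconstructed from the xtrace (sketch only).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1               # @950: a pid is required
        kill -0 "$pid" || return 1              # @954: bail if already gone
        local process_name=notme
        if [ "$(uname)" = Linux ]; then         # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
        fi
        if [ "$process_name" = sudo ]; then     # @960
            # assumed: retarget the child the sudo wrapper spawned
            pid=$(ps --no-headers -o pid= --ppid "$pid")
        fi
        echo "killing process with pid $pid"    # @968
        kill "$pid"                             # @969
        wait "$pid"                             # @974: reap it so the exit
    }                                           # status is observed

Here it runs as killprocess 137316 against the nvmf_tgt pid saved at startup, after nvmfcleanup has already unloaded the nvme-tcp and nvme-fabrics kernel modules.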
00:07:24.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.010 --rc genhtml_branch_coverage=1 00:07:24.010 --rc genhtml_function_coverage=1 00:07:24.010 --rc genhtml_legend=1 00:07:24.010 --rc geninfo_all_blocks=1 00:07:24.010 --rc geninfo_unexecuted_blocks=1 00:07:24.010 00:07:24.010 ' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.010 --rc genhtml_branch_coverage=1 00:07:24.010 --rc genhtml_function_coverage=1 00:07:24.010 --rc genhtml_legend=1 00:07:24.010 --rc geninfo_all_blocks=1 00:07:24.010 --rc geninfo_unexecuted_blocks=1 00:07:24.010 00:07:24.010 ' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.010 --rc genhtml_branch_coverage=1 00:07:24.010 --rc genhtml_function_coverage=1 00:07:24.010 --rc genhtml_legend=1 00:07:24.010 --rc geninfo_all_blocks=1 00:07:24.010 --rc geninfo_unexecuted_blocks=1 00:07:24.010 00:07:24.010 ' 00:07:24.010 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.010 --rc genhtml_branch_coverage=1 00:07:24.010 --rc genhtml_function_coverage=1 00:07:24.010 --rc genhtml_legend=1 00:07:24.010 --rc geninfo_all_blocks=1 00:07:24.010 --rc geninfo_unexecuted_blocks=1 00:07:24.010 00:07:24.011 ' 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.011 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.272 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:32.422 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:32.422 15:26:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:32.422 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:32.422 Found net devices under 0000:31:00.0: cvl_0_0 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:32.422 Found net devices under 0000:31:00.1: cvl_0_1 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
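Interface discovery above reduces to two steps: whitelist the supported Intel/Mellanox PCI device IDs, then read the netdev names sysfs publishes under every matching PCI function. A compressed sketch of that flow follows; the harness drives it from an internal pci_bus_cache map, so the lspci scan below is a hypothetical stand-in, and only the 0x8086:0x159b (E810) ID that matched this rig is kept:

    # Sketch: locate TCP-capable test NICs the way nvmf/common.sh does above.
    shopt -s nullglob    # let the sysfs glob vanish when a port has no netdev
    mapfile -t pci_devs < <(lspci -Dnn | awk '/8086:159b/ {print $1}')

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # @407
        (( ${#pci_net_devs[@]} )) || continue               # @418-style guard
        pci_net_devs=("${pci_net_devs[@]##*/}")             # @423: strip path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @424
        net_devs+=("${pci_net_devs[@]}")                    # @425
    done

Both E810 ports land in net_devs here (cvl_0_0 and cvl_0_1). The lines that follow split them across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side with 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, and the two pings confirm the path in each direction before any NVMe traffic flows.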
00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.422 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.423 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:07:32.423 00:07:32.423 --- 10.0.0.2 ping statistics --- 00:07:32.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.423 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:07:32.423 00:07:32.423 --- 10.0.0.1 ping statistics --- 00:07:32.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.423 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=149826 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 149826 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 149826 ']' 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:32.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.423 15:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.423 [2024-09-27 15:26:12.229984] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:32.423 [2024-09-27 15:26:12.230049] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.423 [2024-09-27 15:26:12.316937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.423 [2024-09-27 15:26:12.363142] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.423 [2024-09-27 15:26:12.363196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.423 [2024-09-27 15:26:12.363204] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.423 [2024-09-27 15:26:12.363212] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.423 [2024-09-27 15:26:12.363218] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.423 [2024-09-27 15:26:12.363324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.423 [2024-09-27 15:26:12.363325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 [2024-09-27 15:26:13.076055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 [2024-09-27 15:26:13.100359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 NULL1 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 Delay0 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=150089 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:32.686 15:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:32.948 [2024-09-27 15:26:13.227509] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
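With connectivity proven and nvmf_tgt listening, the delete_subsystem test itself is a short RPC sequence: stand up a TCP subsystem whose only namespace is a delay bdev (the delay bdev's latency arguments are in microseconds, so 1000000 parks every I/O for about a second and keeps the queues full), start spdk_nvme_perf against it, then tear the subsystem down mid-run. A condensed sketch using the values from the trace, where rpc_cmd is the harness wrapper around scripts/rpc.py seen throughout this log:

    # Sketch of target/delete_subsystem.sh@15-@32 as traced above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10            # allow any host, 10 ns max
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # 1000 MiB null bdev (512-byte blocks) wrapped in a delay bdev that
    # holds every read and write for ~1 s before completing it.
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 5 s of queue-depth-128, 70/30 random read/write, 512-byte I/O...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                                   # @28
    sleep 2                                       # @30: let I/O pile up

    # ...then delete the subsystem out from under it (@32, traced below).
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of "completed with error (sct=0, sc=8)" lines that follows is the expected outcome, not a failure: generic status 0x08 is "command aborted due to SQ deletion", meaning the in-flight commands died with their queues, and the perf job's "starting I/O failed: -6" (-ENXIO) shows fresh submissions being refused once the namespace is gone.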
00:07:34.869 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.869 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.869 15:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:35.132 [condensed: several hundred near-identical 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, interleaved with repeated 'starting I/O failed: -6' entries, as the in-flight perf I/O is failed while the subsystem is deleted underneath it]
00:07:35.133 [2024-09-27 15:26:15.372857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6a70 is same with the state(6) to be set
00:07:35.133 [2024-09-27 15:26:15.374087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f852800cfe0 is same with the state(6) to be set
00:07:35.133 [2024-09-27 15:26:15.374575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f852800d640 is same with the state(6) to be set
00:07:36.079 [2024-09-27 15:26:16.331220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b5b20 is same with the state(6) to be set
00:07:36.079 [2024-09-27 15:26:16.376451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6c50 is same with the state(6) to be set
00:07:36.080 [2024-09-27 15:26:16.377257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8528000c00 is same with the state(6) to be set
00:07:36.080 [2024-09-27 15:26:16.377354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f852800d310 is same with the state(6) to be set
00:07:36.080 [2024-09-27 15:26:16.377582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b80b0 is same with the state(6) to be set
00:07:36.080 Initializing NVMe Controllers
00:07:36.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:36.080 Controller IO queue size 128, less than required.
00:07:36.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:36.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:36.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:36.080 Initialization complete. Launching workers.
00:07:36.080 ========================================================
00:07:36.080                                                                            Latency(us)
00:07:36.080 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:07:36.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  171.27    0.08  892547.41     474.15 1013327.98
00:07:36.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  156.88    0.08  974673.72     509.58 2002264.39
00:07:36.080 ========================================================
00:07:36.080 Total                                                                    :  328.15    0.16  931809.01     474.15 2002264.39
00:07:36.080
00:07:36.080 [2024-09-27 15:26:16.378152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b5b20 (9): Bad file descriptor
00:07:36.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:36.080 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.080 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:36.080 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 150089
00:07:36.080 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 150089
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (150089) - No such process
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 150089
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 150089
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 150089 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 [2024-09-27 15:26:16.909720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=150855 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:36.653 15:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.653 [2024-09-27 15:26:16.994735] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
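The @57/@58/@60 entries that follow are the script's poll-until-dead loop: it repeatedly probes the perf process with kill -0 and gives up after roughly ten seconds. A minimal sketch of that shape, reconstructed from the xtrace (only the probe, the 0.5 s sleep, and the delay > 20 bound are visible in the trace; the exact control flow and failure handling in delete_subsystem.sh are assumptions):

  # Reconstructed polling loop; perf_pid is 150855 in this run.
  delay=0
  while kill -0 "$perf_pid"; do        # probe only, sends no signal; prints
                                       # "No such process" once the pid is gone
      (( delay++ > 20 )) && exit 1     # assumed bail-out after ~10 s of 0.5 s naps
      sleep 0.5
  done

Once nvmf_delete_subsystem pulls the subsystem out from under it, the delayed I/O fails and perf exits on its own; the NOT wait wrapper seen earlier in the trace then asserts that reaping the dead pid returns nonzero (es=1), i.e. perf terminated with errors rather than cleanly.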
00:07:37.226 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.226 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:37.226 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.488 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.488 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:37.488 15:26:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.061 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.061 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:38.061 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.631 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.631 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:38.631 15:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.202 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.202 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:39.202 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.775 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.775 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855 00:07:39.775 15:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.775 Initializing NVMe Controllers 00:07:39.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.775 Controller IO queue size 128, less than required. 00:07:39.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:39.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:39.775 Initialization complete. Launching workers. 
00:07:39.775 ========================================================
00:07:39.775                                                                            Latency(us)
00:07:39.775 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:07:39.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1001800.91 1000142.75 1004517.15
00:07:39.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1002948.48 1000212.77 1042386.48
00:07:39.775 ========================================================
00:07:39.775 Total                                                                    :  256.00    0.12 1002374.70 1000142.75 1042386.48
00:07:39.775
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 150855
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (150855) - No such process
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 150855
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:40.036 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 149826 ']'
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 149826
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 149826 ']'
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 149826
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 149826
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 149826' 00:07:40.298 killing process with pid 149826 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 149826 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 149826 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.298 15:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.849 00:07:42.849 real 0m18.536s 00:07:42.849 user 0m31.030s 00:07:42.849 sys 0m6.783s 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.849 ************************************ 00:07:42.849 END TEST nvmf_delete_subsystem 00:07:42.849 ************************************ 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.849 ************************************ 00:07:42.849 START TEST nvmf_host_management 00:07:42.849 ************************************ 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.849 * Looking for test storage... 
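Before the next test starts, note what the nvmftestfini trace above amounts to; a hedged sketch of the teardown (helper internals beyond what the xtrace shows, for example the final netns delete inside _remove_spdk_ns, are assumptions):

  # Condensed from the traced teardown of the delete_subsystem test:
  sync
  modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 149826 && wait 149826       # killprocess: stop the nvmf target app (reactor_0)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's test rules
  ip netns del cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1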
00:07:42.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:42.849 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:42.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.849 --rc genhtml_branch_coverage=1 00:07:42.849 --rc genhtml_function_coverage=1 00:07:42.849 --rc genhtml_legend=1 00:07:42.849 --rc geninfo_all_blocks=1 00:07:42.849 --rc geninfo_unexecuted_blocks=1 00:07:42.849 00:07:42.849 ' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:42.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.849 --rc genhtml_branch_coverage=1 00:07:42.849 --rc genhtml_function_coverage=1 00:07:42.849 --rc genhtml_legend=1 00:07:42.849 --rc geninfo_all_blocks=1 00:07:42.849 --rc geninfo_unexecuted_blocks=1 00:07:42.849 00:07:42.849 ' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:42.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.849 --rc genhtml_branch_coverage=1 00:07:42.849 --rc genhtml_function_coverage=1 00:07:42.849 --rc genhtml_legend=1 00:07:42.849 --rc geninfo_all_blocks=1 00:07:42.849 --rc geninfo_unexecuted_blocks=1 00:07:42.849 00:07:42.849 ' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:42.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.849 --rc genhtml_branch_coverage=1 00:07:42.849 --rc genhtml_function_coverage=1 00:07:42.849 --rc genhtml_legend=1 00:07:42.849 --rc geninfo_all_blocks=1 00:07:42.849 --rc geninfo_unexecuted_blocks=1 00:07:42.849 00:07:42.849 ' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.849 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times by successive export.sh prepends]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[as above]
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[as above]
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH, condensed as above]
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:42.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.850 15:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:50.999 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:50.999 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:50.999 Found net devices under 0000:31:00.0: cvl_0_0 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:50.999 Found net devices under 0000:31:00.1: 
cvl_0_1 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.999 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:07:51.000 00:07:51.000 --- 10.0.0.2 ping statistics --- 00:07:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.000 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:07:51.000 00:07:51.000 --- 10.0.0.1 ping statistics --- 00:07:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.000 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=155943 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 155943 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 155943 ']' 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.000 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.000 [2024-09-27 15:26:30.855474] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:51.000 [2024-09-27 15:26:30.855536] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.000 [2024-09-27 15:26:30.945050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.000 [2024-09-27 15:26:30.993585] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.000 [2024-09-27 15:26:30.993645] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.000 [2024-09-27 15:26:30.993656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.000 [2024-09-27 15:26:30.993666] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.000 [2024-09-27 15:26:30.993673] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
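[Editor's note — not part of the captured log] The nvmf_tcp_init steps traced above turn the two ice ports into a point-to-point NVMe/TCP rig on a single host: the target port is moved into its own network namespace so target and initiator traverse separate network stacks. A minimal sketch of the equivalent commands, reconstructed from this trace rather than quoted from nvmf/common.sh:
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check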
00:07:51.000 [2024-09-27 15:26:30.993813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.000 [2024-09-27 15:26:30.993972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.000 [2024-09-27 15:26:30.994102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.000 [2024-09-27 15:26:30.994103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.262 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.262 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.263 [2024-09-27 15:26:31.733339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.263 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.524 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:51.524 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.525 Malloc0 00:07:51.525 [2024-09-27 15:26:31.802883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=156061 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 156061 /var/tmp/bdevperf.sock 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 156061 ']' 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:51.525 { 00:07:51.525 "params": { 00:07:51.525 "name": "Nvme$subsystem", 00:07:51.525 "trtype": "$TEST_TRANSPORT", 00:07:51.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.525 "adrfam": "ipv4", 00:07:51.525 "trsvcid": "$NVMF_PORT", 00:07:51.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.525 "hdgst": ${hdgst:-false}, 00:07:51.525 "ddgst": ${ddgst:-false} 00:07:51.525 }, 00:07:51.525 "method": "bdev_nvme_attach_controller" 00:07:51.525 } 00:07:51.525 EOF 00:07:51.525 )") 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:51.525 15:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:51.525 "params": { 00:07:51.525 "name": "Nvme0", 00:07:51.525 "trtype": "tcp", 00:07:51.525 "traddr": "10.0.0.2", 00:07:51.525 "adrfam": "ipv4", 00:07:51.525 "trsvcid": "4420", 00:07:51.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.525 "hdgst": false, 00:07:51.525 "ddgst": false 00:07:51.525 }, 00:07:51.525 "method": "bdev_nvme_attach_controller" 00:07:51.525 }' 00:07:51.525 [2024-09-27 15:26:31.911882] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:51.525 [2024-09-27 15:26:31.911958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156061 ] 00:07:51.525 [2024-09-27 15:26:31.997066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.786 [2024-09-27 15:26:32.044160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.786 Running I/O for 10 seconds... 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=649 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 649 -ge 100 ']' 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:52.362 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.362 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.362 [2024-09-27 15:26:32.806428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.362 [2024-09-27 15:26:32.806573] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set [... the same tcp.c:1773 message for tqpair=0x21a02f0 repeated ~42 more times, timestamps 15:26:32.806578 through 15:26:32.806774, elided ...] [2024-09-27 15:26:32.806778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.363 [2024-09-27 15:26:32.806783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a02f0 is same with the state(6) to be set 00:07:52.363 [2024-09-27 15:26:32.810292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.363 [2024-09-27 15:26:32.810330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.810341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.363 [2024-09-27 15:26:32.810350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.810358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.363 [2024-09-27 15:26:32.810366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.810374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.363 [2024-09-27 15:26:32.810381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.810389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca69d0 is same with the state(6) to be set 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.363 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:52.363 [2024-09-27 15:26:32.823874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca69d0 (9): Bad file descriptor 00:07:52.363 [2024-09-27 15:26:32.823969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.363 [2024-09-27 15:26:32.823982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.823997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.363 [2024-09-27 15:26:32.824005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.363 [2024-09-27 15:26:32.824015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [... this WRITE and the rest of the in-flight queue (cid:0 through cid:59, lba:98304 through lba:105856, len:128 each, lba advancing by 128 per command) were each answered with ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0; roughly 60 near-identical nvme_qpair.c command/completion pairs logged between 15:26:32.824022 and 15:26:32.825041 are elided here ...] [2024-09-27 15:26:32.825052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1
lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.365 [2024-09-27 15:26:32.825059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.365 [2024-09-27 15:26:32.825069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.365 [2024-09-27 15:26:32.825076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.365 [2024-09-27 15:26:32.825127] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xebf6b0 was disconnected and freed. reset controller. 00:07:52.365 [2024-09-27 15:26:32.826313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:52.365 task offset: 98048 on job bdev=Nvme0n1 fails 00:07:52.365 00:07:52.365 Latency(us) 00:07:52.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.365 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.365 Job: Nvme0n1 ended in about 0.58 seconds with error 00:07:52.365 Verification LBA range: start 0x0 length 0x400 00:07:52.365 Nvme0n1 : 0.58 1310.10 81.88 109.46 0.00 44025.42 1570.13 37355.52 00:07:52.365 =================================================================================================================== 00:07:52.365 Total : 1310.10 81.88 109.46 0.00 44025.42 1570.13 37355.52 00:07:52.365 [2024-09-27 15:26:32.828303] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.626 [2024-09-27 15:26:32.880999] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
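[Editor's note — not part of the captured log] The abort storm above is the point of this test step: host_management.sh revokes the host's access to the subsystem while bdevperf still has a queue depth of 64 in flight, so every outstanding command completes ABORTED - SQ DELETION, the qpair is torn down, and the initiator resets the controller. The driving sequence, sketched from the xtrace (the real logic lives in test/nvmf/target/host_management.sh):
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # yank access mid-I/O
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # restore access
  sleep 1
  kill -9 "$perfpid" || true   # reap bdevperf; it may already have exited, hence the || true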
00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 156061 00:07:53.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (156061) - No such process 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:53.569 { 00:07:53.569 "params": { 00:07:53.569 "name": "Nvme$subsystem", 00:07:53.569 "trtype": "$TEST_TRANSPORT", 00:07:53.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.569 "adrfam": "ipv4", 00:07:53.569 "trsvcid": "$NVMF_PORT", 00:07:53.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.569 "hdgst": ${hdgst:-false}, 00:07:53.569 "ddgst": ${ddgst:-false} 00:07:53.569 }, 00:07:53.569 "method": "bdev_nvme_attach_controller" 00:07:53.569 } 00:07:53.569 EOF 00:07:53.569 )") 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:53.569 15:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:53.569 "params": { 00:07:53.569 "name": "Nvme0", 00:07:53.569 "trtype": "tcp", 00:07:53.569 "traddr": "10.0.0.2", 00:07:53.569 "adrfam": "ipv4", 00:07:53.569 "trsvcid": "4420", 00:07:53.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.569 "hdgst": false, 00:07:53.569 "ddgst": false 00:07:53.569 }, 00:07:53.569 "method": "bdev_nvme_attach_controller" 00:07:53.569 }' 00:07:53.570 [2024-09-27 15:26:33.880615] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:53.570 [2024-09-27 15:26:33.880670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156542 ] 00:07:53.570 [2024-09-27 15:26:33.960946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.570 [2024-09-27 15:26:33.990928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.831 Running I/O for 1 seconds... 
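[Editor's note — not part of the captured log] Both bdevperf invocations in this test receive their NVMe-oF controller description on a --json /dev/fd/6x path: gen_nvmf_target_json expands the heredoc template shown in the trace into a one-controller config, and the /dev/fd/62 and /dev/fd/63 arguments indicate it is handed over via process substitution, so no temp file touches disk. The pattern, sketched with the same paths and options as the run above:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1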
00:07:55.218 1534.00 IOPS, 95.88 MiB/s 00:07:55.218 Latency(us) 00:07:55.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.218 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.218 Verification LBA range: start 0x0 length 0x400 00:07:55.218 Nvme0n1 : 1.04 1544.37 96.52 0.00 0.00 40749.93 6853.97 33423.36 00:07:55.218 =================================================================================================================== 00:07:55.218 Total : 1544.37 96.52 0.00 0.00 40749.93 6853.97 33423.36 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.218 rmmod nvme_tcp 00:07:55.218 rmmod nvme_fabrics 00:07:55.218 rmmod nvme_keyring 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 155943 ']' 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 155943 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 155943 ']' 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 155943 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155943 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:55.218 
15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155943' 00:07:55.218 killing process with pid 155943 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 155943 00:07:55.218 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 155943 00:07:55.486 [2024-09-27 15:26:35.712745] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.486 15:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:57.403 00:07:57.403 real 0m14.941s 00:07:57.403 user 0m23.704s 00:07:57.403 sys 0m6.974s 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.403 ************************************ 00:07:57.403 END TEST nvmf_host_management 00:07:57.403 ************************************ 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.403 15:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 ************************************ 00:07:57.665 START TEST nvmf_lvol 00:07:57.665 ************************************ 00:07:57.665 15:26:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.665 * 
Looking for test storage... 00:07:57.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.665 --rc genhtml_branch_coverage=1 00:07:57.665 --rc genhtml_function_coverage=1 00:07:57.665 --rc genhtml_legend=1 00:07:57.665 --rc geninfo_all_blocks=1 00:07:57.665 --rc geninfo_unexecuted_blocks=1 00:07:57.665 00:07:57.665 ' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.665 --rc genhtml_branch_coverage=1 00:07:57.665 --rc genhtml_function_coverage=1 00:07:57.665 --rc genhtml_legend=1 00:07:57.665 --rc geninfo_all_blocks=1 00:07:57.665 --rc geninfo_unexecuted_blocks=1 00:07:57.665 00:07:57.665 ' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.665 --rc genhtml_branch_coverage=1 00:07:57.665 --rc genhtml_function_coverage=1 00:07:57.665 --rc genhtml_legend=1 00:07:57.665 --rc geninfo_all_blocks=1 00:07:57.665 --rc geninfo_unexecuted_blocks=1 00:07:57.665 00:07:57.665 ' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.665 --rc genhtml_branch_coverage=1 00:07:57.665 --rc genhtml_function_coverage=1 00:07:57.665 --rc genhtml_legend=1 00:07:57.665 --rc geninfo_all_blocks=1 00:07:57.665 --rc geninfo_unexecuted_blocks=1 00:07:57.665 00:07:57.665 ' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
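The lt 1.15 2 walk traced above is a plain left-to-right component compare: both version strings are split on '.', '-' and ':', missing components default to 0, and lcov 1.15 therefore sorts below 2. A self-contained sketch of that logic (mirroring the cmp_versions trace, not the harness file itself):

lt() {
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly less at this component
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
  done
  return 1   # equal versions are not 'less than'
}
lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2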
00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.665 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.666 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:05.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:05.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:05.815 Found net devices under 0000:31:00.0: cvl_0_0 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:05.815 Found net devices under 0000:31:00.1: cvl_0_1 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:05.815 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.816 
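The discovery pass above classifies each supported NIC by PCI device ID (0x159b lands in the Intel e810 list) and then maps the PCI function to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/*. The same lookup, done by hand for the first port found above:

ls /sys/bus/pci/devices/0000:31:00.0/net/
# -> cvl_0_0, the name echoed as 'Found net devices under 0000:31:00.0'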
15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:05.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:08:05.816 00:08:05.816 --- 10.0.0.2 ping statistics --- 00:08:05.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.816 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:08:05.816 00:08:05.816 --- 10.0.0.1 ping statistics --- 00:08:05.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.816 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=161239 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 161239 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 161239 ']' 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.816 15:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.816 [2024-09-27 15:26:45.893951] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
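Stripped of harness plumbing, the network bring-up traced above reduces to the sketch below: the target port (cvl_0_0) moves into a fresh network namespace with address 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, one iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path (the -m comment tag on the real iptables rule is dropped here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator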
00:08:05.816 [2024-09-27 15:26:45.894018] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.816 [2024-09-27 15:26:45.986077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.816 [2024-09-27 15:26:46.033244] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.816 [2024-09-27 15:26:46.033299] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.816 [2024-09-27 15:26:46.033308] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.816 [2024-09-27 15:26:46.033315] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.816 [2024-09-27 15:26:46.033321] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.816 [2024-09-27 15:26:46.033427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.816 [2024-09-27 15:26:46.033578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.816 [2024-09-27 15:26:46.033579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.391 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:06.654 [2024-09-27 15:26:46.926858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.654 15:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.916 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:06.916 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.179 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:07.179 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:07.179 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:07.442 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=13d6b097-4075-4b7f-b6ca-18f0247042e0 00:08:07.442 15:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13d6b097-4075-4b7f-b6ca-18f0247042e0 lvol 20 00:08:07.704 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4e76a697-8128-4e22-89e8-106d9020f8e4 00:08:07.704 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.966 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e76a697-8128-4e22-89e8-106d9020f8e4 00:08:07.966 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:08.227 [2024-09-27 15:26:48.584128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.227 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.489 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=161809 00:08:08.489 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:08.489 15:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:09.432 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4e76a697-8128-4e22-89e8-106d9020f8e4 MY_SNAPSHOT 00:08:09.694 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3648c443-449f-47c0-bc4b-ff039fa2e55c 00:08:09.694 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4e76a697-8128-4e22-89e8-106d9020f8e4 30 00:08:09.955 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3648c443-449f-47c0-bc4b-ff039fa2e55c MY_CLONE 00:08:10.216 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f257c61d-4256-4727-9d09-60c8c76a7267 00:08:10.216 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f257c61d-4256-4727-9d09-60c8c76a7267 00:08:10.477 15:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 161809 00:08:20.482 Initializing NVMe Controllers 00:08:20.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.482 Controller IO queue size 128, less than required. 00:08:20.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
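For orientation, the lvol flow above is a straight chain of rpc.py calls; reduced to a sketch (rpc.py abbreviates the scripts/rpc.py path used in the trace, UUIDs are the ones returned above, and lvol sizes are in MiB per the LVOL_BDEV_INIT_SIZE/LVOL_BDEV_FINAL_SIZE variables):

rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
rpc.py bdev_malloc_create 64 512                                    # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
rpc.py bdev_lvol_create_lvstore raid0 lvs                           # -> 13d6b097-4075-4b7f-b6ca-18f0247042e0
rpc.py bdev_lvol_create -u 13d6b097-4075-4b7f-b6ca-18f0247042e0 lvol 20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e76a697-8128-4e22-89e8-106d9020f8e4
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_lvol_snapshot 4e76a697-8128-4e22-89e8-106d9020f8e4 MY_SNAPSHOT
rpc.py bdev_lvol_resize 4e76a697-8128-4e22-89e8-106d9020f8e4 30     # grow the live lvol under I/O
rpc.py bdev_lvol_clone 3648c443-449f-47c0-bc4b-ff039fa2e55c MY_CLONE
rpc.py bdev_lvol_inflate f257c61d-4256-4727-9d09-60c8c76a7267       # decouple the clone from its snapshot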
00:08:20.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.482 Initialization complete. Launching workers. 00:08:20.482 ======================================================== 00:08:20.482 Latency(us) 00:08:20.482 Device Information : IOPS MiB/s Average min max 00:08:20.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17081.10 66.72 7497.55 1502.05 48418.79 00:08:20.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16526.30 64.56 7748.21 3753.78 56123.90 00:08:20.482 ======================================================== 00:08:20.482 Total : 33607.40 131.28 7620.81 1502.05 56123.90 00:08:20.482 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e76a697-8128-4e22-89e8-106d9020f8e4 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13d6b097-4075-4b7f-b6ca-18f0247042e0 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.482 rmmod nvme_tcp 00:08:20.482 rmmod nvme_fabrics 00:08:20.482 rmmod nvme_keyring 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 161239 ']' 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 161239 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 161239 ']' 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 161239 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161239 00:08:20.482 15:26:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161239' 00:08:20.482 killing process with pid 161239 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 161239 00:08:20.482 15:26:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 161239 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.482 15:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.870 00:08:21.870 real 0m24.309s 00:08:21.870 user 1m5.542s 00:08:21.870 sys 0m8.723s 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.870 ************************************ 00:08:21.870 END TEST nvmf_lvol 00:08:21.870 ************************************ 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.870 ************************************ 00:08:21.870 START TEST nvmf_lvs_grow 00:08:21.870 ************************************ 00:08:21.870 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:22.132 * Looking for test storage... 
00:08:22.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.132 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:22.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.133 --rc genhtml_branch_coverage=1 00:08:22.133 --rc genhtml_function_coverage=1 00:08:22.133 --rc genhtml_legend=1 00:08:22.133 --rc geninfo_all_blocks=1 00:08:22.133 --rc geninfo_unexecuted_blocks=1 00:08:22.133 00:08:22.133 ' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:22.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.133 --rc genhtml_branch_coverage=1 00:08:22.133 --rc genhtml_function_coverage=1 00:08:22.133 --rc genhtml_legend=1 00:08:22.133 --rc geninfo_all_blocks=1 00:08:22.133 --rc geninfo_unexecuted_blocks=1 00:08:22.133 00:08:22.133 ' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:22.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.133 --rc genhtml_branch_coverage=1 00:08:22.133 --rc genhtml_function_coverage=1 00:08:22.133 --rc genhtml_legend=1 00:08:22.133 --rc geninfo_all_blocks=1 00:08:22.133 --rc geninfo_unexecuted_blocks=1 00:08:22.133 00:08:22.133 ' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:22.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.133 --rc genhtml_branch_coverage=1 00:08:22.133 --rc genhtml_function_coverage=1 00:08:22.133 --rc genhtml_legend=1 00:08:22.133 --rc geninfo_all_blocks=1 00:08:22.133 --rc geninfo_unexecuted_blocks=1 00:08:22.133 00:08:22.133 ' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:22.133 15:27:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
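Each sourcing of paths/export.sh prepends its tool directories again, which is why the PATH echoed above carries many repeated /opt/golangci, /opt/protoc and /opt/go entries; duplicates are harmless to command lookup but noisy. A hedged sketch of first-occurrence deduplication (the dedup_path helper is illustrative, not part of the SPDK scripts):

    # Keep the first occurrence of each PATH entry, preserving order.
    dedup_path() {
        printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
    }
    PATH=$(dedup_path "$PATH")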
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.133 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:30.286 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:30.286 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.286 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:30.287 Found net devices under 0000:31:00.0: cvl_0_0 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:30.287 Found net devices under 0000:31:00.1: cvl_0_1 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.287 
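The device scan above builds PCI ID lists for Intel e810/x722 and Mellanox parts, then resolves each matching PCI address to its kernel net devices through sysfs; that is how cvl_0_0 and cvl_0_1 are found under 0000:31:00.0 and 0000:31:00.1. The core lookup, reduced to a sketch (the helper name is illustrative; the sysfs layout is standard Linux):

    # List kernel net devices bound to a PCI function.
    net_devs_for_pci() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && echo "${dev##*/}"
        done
    }
    net_devs_for_pci 0000:31:00.0   # prints cvl_0_0 on this test bed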
15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.287 15:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:08:30.287 00:08:30.287 --- 10.0.0.2 ping statistics --- 00:08:30.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.287 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:08:30.287 00:08:30.287 --- 10.0.0.1 ping statistics --- 00:08:30.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.287 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=168995 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 168995 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 168995 ']' 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.287 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.287 [2024-09-27 15:27:10.312209] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
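The nvmf_tcp_init block above splits the two ports of one NIC into target and initiator roles: the target interface moves into a private network namespace with 10.0.0.2 while the initiator side keeps 10.0.0.1, TCP port 4420 is opened, and connectivity is verified in both directions. Condensed from the commands in the trace (run as root; device names are the ones the log discovered):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target runs inside the namespace, every later nvmf_tgt invocation is wrapped in ip netns exec cvl_0_0_ns_spdk, as seen when the app starts below.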
00:08:30.287 [2024-09-27 15:27:10.312278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.287 [2024-09-27 15:27:10.401657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.287 [2024-09-27 15:27:10.447713] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.287 [2024-09-27 15:27:10.447765] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.287 [2024-09-27 15:27:10.447773] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.287 [2024-09-27 15:27:10.447780] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.287 [2024-09-27 15:27:10.447786] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.287 [2024-09-27 15:27:10.447815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.859 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.121 [2024-09-27 15:27:11.348047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 ************************************ 00:08:31.121 START TEST lvs_grow_clean 00:08:31.121 ************************************ 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:31.121 15:27:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.121 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.383 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.383 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.383 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:31.383 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:31.383 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.644 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.644 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.644 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 lvol 150 00:08:31.905 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 00:08:31.905 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.905 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:31.905 [2024-09-27 15:27:12.343278] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:31.905 [2024-09-27 15:27:12.343350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:31.905 true 00:08:31.905 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:31.905 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.166 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.166 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.427 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 00:08:32.427 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.687 [2024-09-27 15:27:13.061539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.687 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=169605 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 169605 /var/tmp/bdevperf.sock 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 169605 ']' 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.948 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.948 [2024-09-27 15:27:13.317892] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
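At this point the clean-grow fixture is in place: a 200 MiB file-backed AIO bdev hosts an lvstore with 49 usable 4 MiB clusters, a 150 MiB lvol is carved out of it, and the backing file is then grown to 400 MiB and rescanned. The RPC skeleton, with paths shortened for readability (RPC names and argument order are as in the trace):

    rpc=scripts/rpc.py
    truncate -s 200M aio_file
    $rpc bdev_aio_create aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150
    truncate -s 400M aio_file          # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev      # ...and let SPDK notice
    $rpc bdev_lvol_grow_lvstore -u "$lvs"   # clusters: 49 -> 99

The grow itself is issued while bdevperf drives random writes over NVMe/TCP, which is the actual point of the test: total_data_clusters is checked to move from 49 to 99 with I/O in flight.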
00:08:32.948 [2024-09-27 15:27:13.317969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169605 ] 00:08:32.948 [2024-09-27 15:27:13.399945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.208 [2024-09-27 15:27:13.446984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.778 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.778 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:33.778 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.038 Nvme0n1 00:08:34.038 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.298 [ 00:08:34.298 { 00:08:34.298 "name": "Nvme0n1", 00:08:34.298 "aliases": [ 00:08:34.298 "c6e40d2f-6a19-4365-9ba1-3ba2b28e5294" 00:08:34.298 ], 00:08:34.298 "product_name": "NVMe disk", 00:08:34.298 "block_size": 4096, 00:08:34.298 "num_blocks": 38912, 00:08:34.298 "uuid": "c6e40d2f-6a19-4365-9ba1-3ba2b28e5294", 00:08:34.298 "numa_id": 0, 00:08:34.298 "assigned_rate_limits": { 00:08:34.298 "rw_ios_per_sec": 0, 00:08:34.298 "rw_mbytes_per_sec": 0, 00:08:34.298 "r_mbytes_per_sec": 0, 00:08:34.298 "w_mbytes_per_sec": 0 00:08:34.298 }, 00:08:34.298 "claimed": false, 00:08:34.298 "zoned": false, 00:08:34.298 "supported_io_types": { 00:08:34.298 "read": true, 00:08:34.298 "write": true, 00:08:34.298 "unmap": true, 00:08:34.298 "flush": true, 00:08:34.298 "reset": true, 00:08:34.298 "nvme_admin": true, 00:08:34.298 "nvme_io": true, 00:08:34.298 "nvme_io_md": false, 00:08:34.298 "write_zeroes": true, 00:08:34.298 "zcopy": false, 00:08:34.298 "get_zone_info": false, 00:08:34.298 "zone_management": false, 00:08:34.298 "zone_append": false, 00:08:34.298 "compare": true, 00:08:34.298 "compare_and_write": true, 00:08:34.298 "abort": true, 00:08:34.298 "seek_hole": false, 00:08:34.298 "seek_data": false, 00:08:34.298 "copy": true, 00:08:34.298 "nvme_iov_md": false 00:08:34.298 }, 00:08:34.298 "memory_domains": [ 00:08:34.298 { 00:08:34.298 "dma_device_id": "system", 00:08:34.298 "dma_device_type": 1 00:08:34.298 } 00:08:34.298 ], 00:08:34.298 "driver_specific": { 00:08:34.298 "nvme": [ 00:08:34.298 { 00:08:34.298 "trid": { 00:08:34.298 "trtype": "TCP", 00:08:34.298 "adrfam": "IPv4", 00:08:34.298 "traddr": "10.0.0.2", 00:08:34.298 "trsvcid": "4420", 00:08:34.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.298 }, 00:08:34.298 "ctrlr_data": { 00:08:34.298 "cntlid": 1, 00:08:34.298 "vendor_id": "0x8086", 00:08:34.298 "model_number": "SPDK bdev Controller", 00:08:34.298 "serial_number": "SPDK0", 00:08:34.298 "firmware_revision": "25.01", 00:08:34.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.298 "oacs": { 00:08:34.298 "security": 0, 00:08:34.298 "format": 0, 00:08:34.298 "firmware": 0, 00:08:34.298 "ns_manage": 0 00:08:34.298 }, 00:08:34.298 "multi_ctrlr": true, 00:08:34.298 
"ana_reporting": false 00:08:34.298 }, 00:08:34.298 "vs": { 00:08:34.298 "nvme_version": "1.3" 00:08:34.298 }, 00:08:34.298 "ns_data": { 00:08:34.298 "id": 1, 00:08:34.298 "can_share": true 00:08:34.298 } 00:08:34.298 } 00:08:34.298 ], 00:08:34.298 "mp_policy": "active_passive" 00:08:34.298 } 00:08:34.298 } 00:08:34.298 ] 00:08:34.298 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.298 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=169866 00:08:34.298 15:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.298 Running I/O for 10 seconds... 00:08:35.685 Latency(us) 00:08:35.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.685 Nvme0n1 : 1.00 22725.00 88.77 0.00 0.00 0.00 0.00 0.00 00:08:35.685 =================================================================================================================== 00:08:35.685 Total : 22725.00 88.77 0.00 0.00 0.00 0.00 0.00 00:08:35.685 00:08:36.258 15:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:36.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.519 Nvme0n1 : 2.00 24050.00 93.95 0.00 0.00 0.00 0.00 0.00 00:08:36.519 =================================================================================================================== 00:08:36.519 Total : 24050.00 93.95 0.00 0.00 0.00 0.00 0.00 00:08:36.519 00:08:36.519 true 00:08:36.519 15:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:36.519 15:27:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.781 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.781 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.781 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 169866 00:08:37.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.353 Nvme0n1 : 3.00 24512.67 95.75 0.00 0.00 0.00 0.00 0.00 00:08:37.353 =================================================================================================================== 00:08:37.353 Total : 24512.67 95.75 0.00 0.00 0.00 0.00 0.00 00:08:37.353 00:08:38.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.296 Nvme0n1 : 4.00 24771.00 96.76 0.00 0.00 0.00 0.00 0.00 00:08:38.296 =================================================================================================================== 00:08:38.296 Total : 24771.00 96.76 0.00 0.00 0.00 0.00 0.00 00:08:38.296 00:08:39.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.681 Nvme0n1 
: 5.00 24948.40 97.45 0.00 0.00 0.00 0.00 0.00 00:08:39.681 =================================================================================================================== 00:08:39.681 Total : 24948.40 97.45 0.00 0.00 0.00 0.00 0.00 00:08:39.681 00:08:40.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.623 Nvme0n1 : 6.00 25055.67 97.87 0.00 0.00 0.00 0.00 0.00 00:08:40.623 =================================================================================================================== 00:08:40.623 Total : 25055.67 97.87 0.00 0.00 0.00 0.00 0.00 00:08:40.623 00:08:41.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.565 Nvme0n1 : 7.00 25144.29 98.22 0.00 0.00 0.00 0.00 0.00 00:08:41.565 =================================================================================================================== 00:08:41.565 Total : 25144.29 98.22 0.00 0.00 0.00 0.00 0.00 00:08:41.565 00:08:42.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.511 Nvme0n1 : 8.00 25203.00 98.45 0.00 0.00 0.00 0.00 0.00 00:08:42.511 =================================================================================================================== 00:08:42.511 Total : 25203.00 98.45 0.00 0.00 0.00 0.00 0.00 00:08:42.511 00:08:43.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.454 Nvme0n1 : 9.00 25262.67 98.68 0.00 0.00 0.00 0.00 0.00 00:08:43.454 =================================================================================================================== 00:08:43.454 Total : 25262.67 98.68 0.00 0.00 0.00 0.00 0.00 00:08:43.454 00:08:44.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.398 Nvme0n1 : 10.00 25306.20 98.85 0.00 0.00 0.00 0.00 0.00 00:08:44.398 =================================================================================================================== 00:08:44.398 Total : 25306.20 98.85 0.00 0.00 0.00 0.00 0.00 00:08:44.398 00:08:44.398 00:08:44.398 Latency(us) 00:08:44.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.398 Nvme0n1 : 10.01 25305.97 98.85 0.00 0.00 5055.21 2307.41 13216.43 00:08:44.398 =================================================================================================================== 00:08:44.398 Total : 25305.97 98.85 0.00 0.00 5055.21 2307.41 13216.43 00:08:44.398 { 00:08:44.398 "results": [ 00:08:44.398 { 00:08:44.398 "job": "Nvme0n1", 00:08:44.398 "core_mask": "0x2", 00:08:44.398 "workload": "randwrite", 00:08:44.398 "status": "finished", 00:08:44.398 "queue_depth": 128, 00:08:44.398 "io_size": 4096, 00:08:44.398 "runtime": 10.00515, 00:08:44.398 "iops": 25305.967426775213, 00:08:44.398 "mibps": 98.85143526084067, 00:08:44.398 "io_failed": 0, 00:08:44.398 "io_timeout": 0, 00:08:44.398 "avg_latency_us": 5055.212832944956, 00:08:44.398 "min_latency_us": 2307.4133333333334, 00:08:44.398 "max_latency_us": 13216.426666666666 00:08:44.398 } 00:08:44.398 ], 00:08:44.398 "core_count": 1 00:08:44.398 } 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 169605 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 169605 ']' 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # kill -0 169605 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 169605 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 169605' 00:08:44.398 killing process with pid 169605 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 169605 00:08:44.398 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.398 00:08:44.398 Latency(us) 00:08:44.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.398 =================================================================================================================== 00:08:44.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.398 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 169605 00:08:44.659 15:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.920 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.920 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:44.920 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.181 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.181 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.181 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.442 [2024-09-27 15:27:25.702216] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:45.442 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:45.442 request: 00:08:45.442 { 00:08:45.442 "uuid": "9dd5fc20-9f29-4e02-bbeb-1af3ed29d910", 00:08:45.442 "method": "bdev_lvol_get_lvstores", 00:08:45.442 "req_id": 1 00:08:45.442 } 00:08:45.442 Got JSON-RPC error response 00:08:45.442 response: 00:08:45.442 { 00:08:45.442 "code": -19, 00:08:45.442 "message": "No such device" 00:08:45.442 } 00:08:45.703 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:45.703 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.703 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:45.703 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.703 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.703 aio_bdev 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:45.703 15:27:26 
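The failed bdev_lvol_get_lvstores above is deliberate: after bdev_aio_delete removes the base bdev, the lvstore must be gone, and the NOT wrapper asserts that the RPC now fails with JSON-RPC error -19 (No such device). A minimal sketch of the idiom; the real helper in autotest_common.sh also classifies exit codes and signals, which this version omits:

    # "NOT cmd" succeeds only when cmd fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # expected failure, e.g. -19 No such device
    }
    NOT scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"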
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.703 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.963 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 -t 2000 00:08:45.963 [ 00:08:45.963 { 00:08:45.963 "name": "c6e40d2f-6a19-4365-9ba1-3ba2b28e5294", 00:08:45.963 "aliases": [ 00:08:45.963 "lvs/lvol" 00:08:45.963 ], 00:08:45.963 "product_name": "Logical Volume", 00:08:45.963 "block_size": 4096, 00:08:45.963 "num_blocks": 38912, 00:08:45.963 "uuid": "c6e40d2f-6a19-4365-9ba1-3ba2b28e5294", 00:08:45.963 "assigned_rate_limits": { 00:08:45.963 "rw_ios_per_sec": 0, 00:08:45.963 "rw_mbytes_per_sec": 0, 00:08:45.963 "r_mbytes_per_sec": 0, 00:08:45.963 "w_mbytes_per_sec": 0 00:08:45.963 }, 00:08:45.963 "claimed": false, 00:08:45.963 "zoned": false, 00:08:45.963 "supported_io_types": { 00:08:45.963 "read": true, 00:08:45.963 "write": true, 00:08:45.963 "unmap": true, 00:08:45.963 "flush": false, 00:08:45.963 "reset": true, 00:08:45.963 "nvme_admin": false, 00:08:45.963 "nvme_io": false, 00:08:45.963 "nvme_io_md": false, 00:08:45.963 "write_zeroes": true, 00:08:45.963 "zcopy": false, 00:08:45.963 "get_zone_info": false, 00:08:45.963 "zone_management": false, 00:08:45.963 "zone_append": false, 00:08:45.963 "compare": false, 00:08:45.963 "compare_and_write": false, 00:08:45.963 "abort": false, 00:08:45.963 "seek_hole": true, 00:08:45.963 "seek_data": true, 00:08:45.963 "copy": false, 00:08:45.963 "nvme_iov_md": false 00:08:45.963 }, 00:08:45.963 "driver_specific": { 00:08:45.963 "lvol": { 00:08:45.963 "lvol_store_uuid": "9dd5fc20-9f29-4e02-bbeb-1af3ed29d910", 00:08:45.963 "base_bdev": "aio_bdev", 00:08:45.963 "thin_provision": false, 00:08:45.963 "num_allocated_clusters": 38, 00:08:45.963 "snapshot": false, 00:08:45.963 "clone": false, 00:08:45.963 "esnap_clone": false 00:08:45.963 } 00:08:45.963 } 00:08:45.963 } 00:08:45.963 ] 00:08:45.963 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:45.963 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:45.963 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.224 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.224 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:46.224 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:46.486 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:46.486 15:27:26 
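After the aio bdev is re-created from the same file, the lvstore and its lvol are rediscovered intact: the 150 MiB lvol occupies 38 of the 99 clusters, leaving 61 free, and the test asserts both numbers around this point. The same checks as standalone probes (the jq filters mirror the ones in the trace):

    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 61 ))    # 99 total - 38 allocated by the 150M lvol
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( total == 99 ))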
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6e40d2f-6a19-4365-9ba1-3ba2b28e5294 00:08:46.486 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9dd5fc20-9f29-4e02-bbeb-1af3ed29d910 00:08:46.747 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.008 00:08:47.008 real 0m15.945s 00:08:47.008 user 0m15.592s 00:08:47.008 sys 0m1.434s 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:47.008 ************************************ 00:08:47.008 END TEST lvs_grow_clean 00:08:47.008 ************************************ 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.008 ************************************ 00:08:47.008 START TEST lvs_grow_dirty 00:08:47.008 ************************************ 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.008 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.268 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:47.268 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4dd55798-2f2f-4ab6-881e-3c1a83730562 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:47.529 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 lvol 150 00:08:47.790 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1659d569-daaf-47c0-9b84-4e84c1e99590 00:08:47.791 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.791 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:47.791 [2024-09-27 15:27:28.271943] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:47.791 [2024-09-27 15:27:28.271984] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:47.791 true 00:08:48.051 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:08:48.051 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:48.051 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:48.051 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:48.311 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1659d569-daaf-47c0-9b84-4e84c1e99590 00:08:48.311 15:27:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.572 [2024-09-27 15:27:28.929848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.572 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=172919 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 172919 /var/tmp/bdevperf.sock 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 172919 ']' 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.833 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.833 [2024-09-27 15:27:29.161225] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:48.833 [2024-09-27 15:27:29.161281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172919 ] 00:08:48.833 [2024-09-27 15:27:29.239324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.833 [2024-09-27 15:27:29.267681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.095 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.095 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:49.095 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:49.356 Nvme0n1 00:08:49.356 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:49.617 [ 00:08:49.617 { 00:08:49.617 "name": "Nvme0n1", 00:08:49.617 "aliases": [ 00:08:49.617 "1659d569-daaf-47c0-9b84-4e84c1e99590" 00:08:49.617 ], 00:08:49.617 "product_name": "NVMe disk", 00:08:49.617 "block_size": 4096, 00:08:49.617 "num_blocks": 38912, 00:08:49.617 "uuid": "1659d569-daaf-47c0-9b84-4e84c1e99590", 00:08:49.617 "numa_id": 0, 00:08:49.617 "assigned_rate_limits": { 00:08:49.617 "rw_ios_per_sec": 0, 00:08:49.617 "rw_mbytes_per_sec": 0, 00:08:49.617 "r_mbytes_per_sec": 0, 00:08:49.617 "w_mbytes_per_sec": 0 00:08:49.617 }, 00:08:49.617 "claimed": false, 00:08:49.617 "zoned": false, 00:08:49.617 "supported_io_types": { 00:08:49.617 "read": true, 00:08:49.617 "write": true, 00:08:49.617 "unmap": true, 00:08:49.617 "flush": true, 00:08:49.617 "reset": true, 00:08:49.617 "nvme_admin": true, 00:08:49.617 "nvme_io": true, 00:08:49.617 "nvme_io_md": false, 00:08:49.617 "write_zeroes": true, 00:08:49.617 "zcopy": false, 00:08:49.617 "get_zone_info": false, 00:08:49.617 "zone_management": false, 00:08:49.617 "zone_append": false, 00:08:49.617 "compare": true, 00:08:49.617 "compare_and_write": true, 00:08:49.617 "abort": true, 00:08:49.617 "seek_hole": false, 00:08:49.617 "seek_data": false, 00:08:49.617 "copy": true, 00:08:49.617 "nvme_iov_md": false 00:08:49.617 }, 00:08:49.617 "memory_domains": [ 00:08:49.617 { 00:08:49.617 "dma_device_id": "system", 00:08:49.617 "dma_device_type": 1 00:08:49.617 } 00:08:49.617 ], 00:08:49.617 "driver_specific": { 00:08:49.617 "nvme": [ 00:08:49.617 { 00:08:49.617 "trid": { 00:08:49.617 "trtype": "TCP", 00:08:49.617 "adrfam": "IPv4", 00:08:49.617 "traddr": "10.0.0.2", 00:08:49.617 "trsvcid": "4420", 00:08:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:49.617 }, 00:08:49.617 "ctrlr_data": { 00:08:49.617 "cntlid": 1, 00:08:49.617 "vendor_id": "0x8086", 00:08:49.617 "model_number": "SPDK bdev Controller", 00:08:49.617 "serial_number": "SPDK0", 00:08:49.617 "firmware_revision": "25.01", 00:08:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.617 "oacs": { 00:08:49.617 "security": 0, 00:08:49.617 "format": 0, 00:08:49.617 "firmware": 0, 00:08:49.617 "ns_manage": 0 00:08:49.617 }, 00:08:49.617 "multi_ctrlr": true, 00:08:49.617 
"ana_reporting": false 00:08:49.617 }, 00:08:49.617 "vs": { 00:08:49.617 "nvme_version": "1.3" 00:08:49.617 }, 00:08:49.617 "ns_data": { 00:08:49.617 "id": 1, 00:08:49.617 "can_share": true 00:08:49.617 } 00:08:49.617 } 00:08:49.617 ], 00:08:49.617 "mp_policy": "active_passive" 00:08:49.617 } 00:08:49.617 } 00:08:49.617 ] 00:08:49.617 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=172953 00:08:49.617 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:49.617 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.617 Running I/O for 10 seconds... 00:08:50.562 Latency(us) 00:08:50.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.562 Nvme0n1 : 1.00 25249.00 98.63 0.00 0.00 0.00 0.00 0.00 00:08:50.562 =================================================================================================================== 00:08:50.562 Total : 25249.00 98.63 0.00 0.00 0.00 0.00 0.00 00:08:50.562 00:08:51.507 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:08:51.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.768 Nvme0n1 : 2.00 25381.50 99.15 0.00 0.00 0.00 0.00 0.00 00:08:51.768 =================================================================================================================== 00:08:51.768 Total : 25381.50 99.15 0.00 0.00 0.00 0.00 0.00 00:08:51.768 00:08:51.768 true 00:08:51.768 15:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:08:51.768 15:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:52.031 15:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:52.031 15:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:52.031 15:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 172953 00:08:52.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.603 Nvme0n1 : 3.00 25426.67 99.32 0.00 0.00 0.00 0.00 0.00 00:08:52.603 =================================================================================================================== 00:08:52.603 Total : 25426.67 99.32 0.00 0.00 0.00 0.00 0.00 00:08:52.603 00:08:53.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.548 Nvme0n1 : 4.00 25464.75 99.47 0.00 0.00 0.00 0.00 0.00 00:08:53.548 =================================================================================================================== 00:08:53.548 Total : 25464.75 99.47 0.00 0.00 0.00 0.00 0.00 00:08:53.548 00:08:54.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.935 Nvme0n1 
: 5.00 25509.60 99.65 0.00 0.00 0.00 0.00 0.00 00:08:54.935 =================================================================================================================== 00:08:54.935 Total : 25509.60 99.65 0.00 0.00 0.00 0.00 0.00 00:08:54.935 00:08:55.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.881 Nvme0n1 : 6.00 25531.67 99.73 0.00 0.00 0.00 0.00 0.00 00:08:55.881 =================================================================================================================== 00:08:55.881 Total : 25531.67 99.73 0.00 0.00 0.00 0.00 0.00 00:08:55.881 00:08:56.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.824 Nvme0n1 : 7.00 25553.71 99.82 0.00 0.00 0.00 0.00 0.00 00:08:56.824 =================================================================================================================== 00:08:56.824 Total : 25553.71 99.82 0.00 0.00 0.00 0.00 0.00 00:08:56.824 00:08:57.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.781 Nvme0n1 : 8.00 25567.88 99.87 0.00 0.00 0.00 0.00 0.00 00:08:57.781 =================================================================================================================== 00:08:57.781 Total : 25567.88 99.87 0.00 0.00 0.00 0.00 0.00 00:08:57.781 00:08:58.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.726 Nvme0n1 : 9.00 25580.11 99.92 0.00 0.00 0.00 0.00 0.00 00:08:58.726 =================================================================================================================== 00:08:58.726 Total : 25580.11 99.92 0.00 0.00 0.00 0.00 0.00 00:08:58.726 00:08:59.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.669 Nvme0n1 : 10.00 25589.50 99.96 0.00 0.00 0.00 0.00 0.00 00:08:59.669 =================================================================================================================== 00:08:59.669 Total : 25589.50 99.96 0.00 0.00 0.00 0.00 0.00 00:08:59.669 00:08:59.669 00:08:59.669 Latency(us) 00:08:59.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.670 Nvme0n1 : 10.00 25590.13 99.96 0.00 0.00 4998.95 1570.13 9120.43 00:08:59.670 =================================================================================================================== 00:08:59.670 Total : 25590.13 99.96 0.00 0.00 4998.95 1570.13 9120.43 00:08:59.670 { 00:08:59.670 "results": [ 00:08:59.670 { 00:08:59.670 "job": "Nvme0n1", 00:08:59.670 "core_mask": "0x2", 00:08:59.670 "workload": "randwrite", 00:08:59.670 "status": "finished", 00:08:59.670 "queue_depth": 128, 00:08:59.670 "io_size": 4096, 00:08:59.670 "runtime": 10.002255, 00:08:59.670 "iops": 25590.129425814477, 00:08:59.670 "mibps": 99.9614430695878, 00:08:59.670 "io_failed": 0, 00:08:59.670 "io_timeout": 0, 00:08:59.670 "avg_latency_us": 4998.951573337918, 00:08:59.670 "min_latency_us": 1570.1333333333334, 00:08:59.670 "max_latency_us": 9120.426666666666 00:08:59.670 } 00:08:59.670 ], 00:08:59.670 "core_count": 1 00:08:59.670 } 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 172919 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 172919 ']' 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 172919 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 172919 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 172919' 00:08:59.670 killing process with pid 172919 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 172919 00:08:59.670 Received shutdown signal, test time was about 10.000000 seconds 00:08:59.670 00:08:59.670 Latency(us) 00:08:59.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.670 =================================================================================================================== 00:08:59.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:59.670 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 172919 00:08:59.930 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.930 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:00.192 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:00.192 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 168995 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 168995 00:09:00.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 168995 Killed "${NVMF_APP[@]}" "$@" 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=175206 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 175206 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 175206 ']' 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.453 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.454 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.454 15:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.454 [2024-09-27 15:27:40.922850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:00.454 [2024-09-27 15:27:40.922915] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.715 [2024-09-27 15:27:41.005146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.715 [2024-09-27 15:27:41.035031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.715 [2024-09-27 15:27:41.035066] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.715 [2024-09-27 15:27:41.035071] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.715 [2024-09-27 15:27:41.035080] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.715 [2024-09-27 15:27:41.035085] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
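(Editorial aside, not captured output: the "dirty" grow sequence that this restarted target is now recovering can be reduced to the RPC calls already shown earlier in this test. A minimal sketch follows; the file path, bdev name, and cluster counts are the ones from this run, and the RPC/AIO_FILE variables are shorthands introduced here, so treat everything as placeholders.)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO_FILE"                        # 200M backing file
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096      # AIO bdev with 4K blocks
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # yields 49 data clusters
$RPC bdev_lvol_create -u "$lvs" lvol 150            # 150M logical volume
truncate -s 400M "$AIO_FILE"                        # grow the backing file
$RPC bdev_aio_rescan aio_bdev                       # bdev adopts the new size
$RPC bdev_lvol_grow_lvstore -u "$lvs"               # metadata grows: 49 -> 99 clusters
# "dirty" part: the nvmf app is then killed with SIGKILL, so on restart a
# fresh bdev_aio_create forces blobstore recovery of the same lvstore:
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 after recovery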
00:09:00.715 [2024-09-27 15:27:41.035100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.287 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.549 [2024-09-27 15:27:41.908350] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:01.549 [2024-09-27 15:27:41.908422] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:01.549 [2024-09-27 15:27:41.908445] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1659d569-daaf-47c0-9b84-4e84c1e99590 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1659d569-daaf-47c0-9b84-4e84c1e99590 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.549 15:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.810 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1659d569-daaf-47c0-9b84-4e84c1e99590 -t 2000 00:09:01.810 [ 00:09:01.810 { 00:09:01.810 "name": "1659d569-daaf-47c0-9b84-4e84c1e99590", 00:09:01.810 "aliases": [ 00:09:01.810 "lvs/lvol" 00:09:01.810 ], 00:09:01.810 "product_name": "Logical Volume", 00:09:01.810 "block_size": 4096, 00:09:01.810 "num_blocks": 38912, 00:09:01.810 "uuid": "1659d569-daaf-47c0-9b84-4e84c1e99590", 00:09:01.810 "assigned_rate_limits": { 00:09:01.810 "rw_ios_per_sec": 0, 00:09:01.810 "rw_mbytes_per_sec": 0, 00:09:01.810 "r_mbytes_per_sec": 0, 00:09:01.810 "w_mbytes_per_sec": 0 00:09:01.810 }, 00:09:01.810 "claimed": false, 00:09:01.810 "zoned": false, 
00:09:01.810 "supported_io_types": { 00:09:01.810 "read": true, 00:09:01.810 "write": true, 00:09:01.810 "unmap": true, 00:09:01.810 "flush": false, 00:09:01.810 "reset": true, 00:09:01.810 "nvme_admin": false, 00:09:01.810 "nvme_io": false, 00:09:01.810 "nvme_io_md": false, 00:09:01.810 "write_zeroes": true, 00:09:01.810 "zcopy": false, 00:09:01.810 "get_zone_info": false, 00:09:01.810 "zone_management": false, 00:09:01.810 "zone_append": false, 00:09:01.810 "compare": false, 00:09:01.810 "compare_and_write": false, 00:09:01.810 "abort": false, 00:09:01.810 "seek_hole": true, 00:09:01.810 "seek_data": true, 00:09:01.810 "copy": false, 00:09:01.810 "nvme_iov_md": false 00:09:01.810 }, 00:09:01.810 "driver_specific": { 00:09:01.810 "lvol": { 00:09:01.810 "lvol_store_uuid": "4dd55798-2f2f-4ab6-881e-3c1a83730562", 00:09:01.810 "base_bdev": "aio_bdev", 00:09:01.810 "thin_provision": false, 00:09:01.810 "num_allocated_clusters": 38, 00:09:01.810 "snapshot": false, 00:09:01.810 "clone": false, 00:09:01.810 "esnap_clone": false 00:09:01.810 } 00:09:01.810 } 00:09:01.810 } 00:09:01.810 ] 00:09:01.810 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:01.810 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:01.810 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:02.072 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:02.072 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:02.072 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.333 [2024-09-27 15:27:42.769059] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.333 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.334 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.334 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:02.334 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:02.595 request: 00:09:02.595 { 00:09:02.596 "uuid": "4dd55798-2f2f-4ab6-881e-3c1a83730562", 00:09:02.596 "method": "bdev_lvol_get_lvstores", 00:09:02.596 "req_id": 1 00:09:02.596 } 00:09:02.596 Got JSON-RPC error response 00:09:02.596 response: 00:09:02.596 { 00:09:02.596 "code": -19, 00:09:02.596 "message": "No such device" 00:09:02.596 } 00:09:02.596 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:02.596 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.596 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.596 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.596 15:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.857 aio_bdev 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1659d569-daaf-47c0-9b84-4e84c1e99590 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1659d569-daaf-47c0-9b84-4e84c1e99590 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.857 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.857 15:27:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1659d569-daaf-47c0-9b84-4e84c1e99590 -t 2000 00:09:03.118 [ 00:09:03.118 { 00:09:03.118 "name": "1659d569-daaf-47c0-9b84-4e84c1e99590", 00:09:03.118 "aliases": [ 00:09:03.118 "lvs/lvol" 00:09:03.118 ], 00:09:03.118 "product_name": "Logical Volume", 00:09:03.118 "block_size": 4096, 00:09:03.118 "num_blocks": 38912, 00:09:03.118 "uuid": "1659d569-daaf-47c0-9b84-4e84c1e99590", 00:09:03.118 "assigned_rate_limits": { 00:09:03.118 "rw_ios_per_sec": 0, 00:09:03.118 "rw_mbytes_per_sec": 0, 00:09:03.118 "r_mbytes_per_sec": 0, 00:09:03.118 "w_mbytes_per_sec": 0 00:09:03.118 }, 00:09:03.118 "claimed": false, 00:09:03.118 "zoned": false, 00:09:03.118 "supported_io_types": { 00:09:03.118 "read": true, 00:09:03.118 "write": true, 00:09:03.118 "unmap": true, 00:09:03.118 "flush": false, 00:09:03.118 "reset": true, 00:09:03.118 "nvme_admin": false, 00:09:03.118 "nvme_io": false, 00:09:03.118 "nvme_io_md": false, 00:09:03.118 "write_zeroes": true, 00:09:03.118 "zcopy": false, 00:09:03.118 "get_zone_info": false, 00:09:03.118 "zone_management": false, 00:09:03.118 "zone_append": false, 00:09:03.118 "compare": false, 00:09:03.118 "compare_and_write": false, 00:09:03.118 "abort": false, 00:09:03.118 "seek_hole": true, 00:09:03.118 "seek_data": true, 00:09:03.118 "copy": false, 00:09:03.118 "nvme_iov_md": false 00:09:03.118 }, 00:09:03.118 "driver_specific": { 00:09:03.118 "lvol": { 00:09:03.118 "lvol_store_uuid": "4dd55798-2f2f-4ab6-881e-3c1a83730562", 00:09:03.118 "base_bdev": "aio_bdev", 00:09:03.118 "thin_provision": false, 00:09:03.118 "num_allocated_clusters": 38, 00:09:03.118 "snapshot": false, 00:09:03.118 "clone": false, 00:09:03.118 "esnap_clone": false 00:09:03.118 } 00:09:03.118 } 00:09:03.118 } 00:09:03.118 ] 00:09:03.118 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:03.118 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:03.118 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:03.380 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:03.380 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 00:09:03.380 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:03.380 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:03.380 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1659d569-daaf-47c0-9b84-4e84c1e99590 00:09:03.641 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4dd55798-2f2f-4ab6-881e-3c1a83730562 
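(Editorial sketch again, same placeholder variables as above: the hot-remove check exercised just before this cleanup boils down to three RPCs. Deleting the base AIO bdev closes the lvstore, querying it then fails with JSON-RPC error -19 "No such device", and re-creating the AIO bdev triggers the blobstore recovery that re-opens it with its cluster counts intact.)
$RPC bdev_aio_delete aio_bdev                       # hot-remove: lvstore is closed
$RPC bdev_lvol_get_lvstores -u "$lvs" \
  || echo "expected: JSON-RPC error -19, No such device"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096      # recovery re-opens the lvstore
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99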
00:09:03.903 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.903 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.903 00:09:03.903 real 0m16.935s 00:09:03.903 user 0m44.570s 00:09:03.903 sys 0m3.005s 00:09:03.903 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.903 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.903 ************************************ 00:09:03.903 END TEST lvs_grow_dirty 00:09:03.903 ************************************ 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:04.164 nvmf_trace.0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.164 rmmod nvme_tcp 00:09:04.164 rmmod nvme_fabrics 00:09:04.164 rmmod nvme_keyring 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 175206 ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 175206 00:09:04.164 
15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 175206 ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 175206 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 175206 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 175206' 00:09:04.164 killing process with pid 175206 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 175206 00:09:04.164 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 175206 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:04.424 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.425 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.425 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.425 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.425 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.338 00:09:06.338 real 0m44.471s 00:09:06.338 user 1m6.555s 00:09:06.338 sys 0m10.689s 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.338 ************************************ 00:09:06.338 END TEST nvmf_lvs_grow 00:09:06.338 ************************************ 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.338 15:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.600 ************************************ 00:09:06.600 START TEST nvmf_bdev_io_wait 00:09:06.600 ************************************ 00:09:06.600 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:06.600 * Looking for test storage... 00:09:06.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.600 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:06.600 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:06.600 15:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.600 --rc genhtml_branch_coverage=1 00:09:06.600 --rc genhtml_function_coverage=1 00:09:06.600 --rc genhtml_legend=1 00:09:06.600 --rc geninfo_all_blocks=1 00:09:06.600 --rc geninfo_unexecuted_blocks=1 00:09:06.600 00:09:06.600 ' 00:09:06.600 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.600 --rc genhtml_branch_coverage=1 00:09:06.600 --rc genhtml_function_coverage=1 00:09:06.600 --rc genhtml_legend=1 00:09:06.600 --rc geninfo_all_blocks=1 00:09:06.600 --rc geninfo_unexecuted_blocks=1 00:09:06.600 00:09:06.600 ' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:06.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.601 --rc genhtml_branch_coverage=1 00:09:06.601 --rc genhtml_function_coverage=1 00:09:06.601 --rc genhtml_legend=1 00:09:06.601 --rc geninfo_all_blocks=1 00:09:06.601 --rc geninfo_unexecuted_blocks=1 00:09:06.601 00:09:06.601 ' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:06.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.601 --rc genhtml_branch_coverage=1 00:09:06.601 --rc genhtml_function_coverage=1 00:09:06.601 --rc genhtml_legend=1 00:09:06.601 --rc geninfo_all_blocks=1 00:09:06.601 --rc geninfo_unexecuted_blocks=1 00:09:06.601 00:09:06.601 ' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.601 15:27:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.601 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:14.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:14.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:14.748 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:14.749 Found net devices under 0000:31:00.0: cvl_0_0 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:14.749 Found net devices under 0000:31:00.1: cvl_0_1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.749 15:27:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:09:14.749 00:09:14.749 --- 10.0.0.2 ping statistics --- 00:09:14.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.749 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:09:14.749 00:09:14.749 --- 10.0.0.1 ping statistics --- 00:09:14.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.749 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=180307 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 180307 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 180307 ']' 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.749 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 [2024-09-27 15:27:54.807456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
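The trace above is the whole of nvmf_tcp_init: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, its sibling cvl_0_1 stays in the root namespace as the initiator, and both directions are ping-verified before the target starts. A condensed standalone sketch of that bring-up, with device names, addresses, and flags taken from the trace — the condensed form is ours, the test drives these steps through helpers in nvmf/common.sh, and the real iptables comment embeds the full rule text rather than the bare SPDK_NVMF tag used here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port; the comment tags the rule so teardown
    # can strip it later with a grep -v over iptables-save output.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                         # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns
    # Launch the target inside the namespace: cores 0-3 (-m 0xF), tracepoints
    # enabled (-e 0xFFFF), paused until RPC configuration (--wait-for-rpc).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &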
00:09:14.749 [2024-09-27 15:27:54.807519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.749 [2024-09-27 15:27:54.897414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.749 [2024-09-27 15:27:54.946352] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.749 [2024-09-27 15:27:54.946410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.749 [2024-09-27 15:27:54.946418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.749 [2024-09-27 15:27:54.946425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.749 [2024-09-27 15:27:54.946431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.749 [2024-09-27 15:27:54.946537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.749 [2024-09-27 15:27:54.946693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.749 [2024-09-27 15:27:54.946837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.749 [2024-09-27 15:27:54.946838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:15.323 [2024-09-27 15:27:55.753806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.323 Malloc0 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.323 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.587 [2024-09-27 15:27:55.826671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=180486 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=180488 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:15.587 { 00:09:15.587 "params": { 
00:09:15.587 "name": "Nvme$subsystem", 00:09:15.587 "trtype": "$TEST_TRANSPORT", 00:09:15.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.587 "adrfam": "ipv4", 00:09:15.587 "trsvcid": "$NVMF_PORT", 00:09:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.587 "hdgst": ${hdgst:-false}, 00:09:15.587 "ddgst": ${ddgst:-false} 00:09:15.587 }, 00:09:15.587 "method": "bdev_nvme_attach_controller" 00:09:15.587 } 00:09:15.587 EOF 00:09:15.587 )") 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=180490 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:15.587 { 00:09:15.587 "params": { 00:09:15.587 "name": "Nvme$subsystem", 00:09:15.587 "trtype": "$TEST_TRANSPORT", 00:09:15.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.587 "adrfam": "ipv4", 00:09:15.587 "trsvcid": "$NVMF_PORT", 00:09:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.587 "hdgst": ${hdgst:-false}, 00:09:15.587 "ddgst": ${ddgst:-false} 00:09:15.587 }, 00:09:15.587 "method": "bdev_nvme_attach_controller" 00:09:15.587 } 00:09:15.587 EOF 00:09:15.587 )") 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=180493 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:15.587 { 00:09:15.587 "params": { 00:09:15.587 "name": "Nvme$subsystem", 00:09:15.587 "trtype": "$TEST_TRANSPORT", 00:09:15.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.587 "adrfam": "ipv4", 00:09:15.587 "trsvcid": "$NVMF_PORT", 00:09:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.587 "hdgst": ${hdgst:-false}, 
00:09:15.587 "ddgst": ${ddgst:-false} 00:09:15.587 }, 00:09:15.587 "method": "bdev_nvme_attach_controller" 00:09:15.587 } 00:09:15.587 EOF 00:09:15.587 )") 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:15.587 { 00:09:15.587 "params": { 00:09:15.587 "name": "Nvme$subsystem", 00:09:15.587 "trtype": "$TEST_TRANSPORT", 00:09:15.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.587 "adrfam": "ipv4", 00:09:15.587 "trsvcid": "$NVMF_PORT", 00:09:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.587 "hdgst": ${hdgst:-false}, 00:09:15.587 "ddgst": ${ddgst:-false} 00:09:15.587 }, 00:09:15.587 "method": "bdev_nvme_attach_controller" 00:09:15.587 } 00:09:15.587 EOF 00:09:15.587 )") 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 180486 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:15.587 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:15.588 "params": { 00:09:15.588 "name": "Nvme1", 00:09:15.588 "trtype": "tcp", 00:09:15.588 "traddr": "10.0.0.2", 00:09:15.588 "adrfam": "ipv4", 00:09:15.588 "trsvcid": "4420", 00:09:15.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.588 "hdgst": false, 00:09:15.588 "ddgst": false 00:09:15.588 }, 00:09:15.588 "method": "bdev_nvme_attach_controller" 00:09:15.588 }' 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:15.588 "params": { 00:09:15.588 "name": "Nvme1", 00:09:15.588 "trtype": "tcp", 00:09:15.588 "traddr": "10.0.0.2", 00:09:15.588 "adrfam": "ipv4", 00:09:15.588 "trsvcid": "4420", 00:09:15.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.588 "hdgst": false, 00:09:15.588 "ddgst": false 00:09:15.588 }, 00:09:15.588 "method": "bdev_nvme_attach_controller" 00:09:15.588 }' 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:15.588 "params": { 00:09:15.588 "name": "Nvme1", 00:09:15.588 "trtype": "tcp", 00:09:15.588 "traddr": "10.0.0.2", 00:09:15.588 "adrfam": "ipv4", 00:09:15.588 "trsvcid": "4420", 00:09:15.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.588 "hdgst": false, 00:09:15.588 "ddgst": false 00:09:15.588 }, 00:09:15.588 "method": "bdev_nvme_attach_controller" 00:09:15.588 }' 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:15.588 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:15.588 "params": { 00:09:15.588 "name": "Nvme1", 00:09:15.588 "trtype": "tcp", 00:09:15.588 "traddr": "10.0.0.2", 00:09:15.588 "adrfam": "ipv4", 00:09:15.588 "trsvcid": "4420", 00:09:15.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.588 "hdgst": false, 00:09:15.588 "ddgst": false 00:09:15.588 }, 00:09:15.588 "method": "bdev_nvme_attach_controller" 00:09:15.588 }' 00:09:15.588 [2024-09-27 15:27:55.886131] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:15.588 [2024-09-27 15:27:55.886203] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:15.588 [2024-09-27 15:27:55.887385] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:15.588 [2024-09-27 15:27:55.887451] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:15.588 [2024-09-27 15:27:55.890832] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:15.588 [2024-09-27 15:27:55.890836] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:15.588 [2024-09-27 15:27:55.890909] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:15.588 [2024-09-27 15:27:55.890913] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:15.850 [2024-09-27 15:27:56.105967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.850 [2024-09-27 15:27:56.134195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:15.850 [2024-09-27 15:27:56.197954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.850 [2024-09-27 15:27:56.226452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:15.850 [2024-09-27 15:27:56.245318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.850 [2024-09-27 15:27:56.268798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:15.850 [2024-09-27 15:27:56.323495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.112 [2024-09-27 15:27:56.348881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.112 Running I/O for 1 seconds... 00:09:16.112 Running I/O for 1 seconds... 00:09:16.373 Running I/O for 1 seconds... 00:09:16.373 Running I/O for 1 seconds... 00:09:17.318 10294.00 IOPS, 40.21 MiB/s 00:09:17.318 Latency(us) 00:09:17.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.318 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:17.318 Nvme1n1 : 1.01 10354.11 40.45 0.00 0.00 12315.54 6471.68 19114.67 00:09:17.318 =================================================================================================================== 00:09:17.318 Total : 10354.11 40.45 0.00 0.00 12315.54 6471.68 19114.67 00:09:17.318 9405.00 IOPS, 36.74 MiB/s 00:09:17.318 Latency(us) 00:09:17.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.318 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:17.318 Nvme1n1 : 1.01 9461.92 36.96 0.00 0.00 13470.82 6144.00 22282.24 00:09:17.318 =================================================================================================================== 00:09:17.318 Total : 9461.92 36.96 0.00 0.00 13470.82 6144.00 22282.24 00:09:17.318 11921.00 IOPS, 46.57 MiB/s 00:09:17.318 Latency(us) 00:09:17.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.318 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:17.318 Nvme1n1 : 1.00 12016.87 46.94 0.00 0.00 10631.05 2170.88 21080.75 00:09:17.318 =================================================================================================================== 00:09:17.318 Total : 12016.87 46.94 0.00 0.00 10631.05 2170.88 21080.75 00:09:17.318 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 180488 00:09:17.318 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 180490 00:09:17.580 184824.00 IOPS, 721.97 MiB/s 00:09:17.580 Latency(us) 00:09:17.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.580 Job: Nvme1n1 (Core Mask 0x40, 
workload: flush, depth: 128, IO size: 4096) 00:09:17.580 Nvme1n1 : 1.00 184459.94 720.55 0.00 0.00 690.13 305.49 1966.08 00:09:17.580 =================================================================================================================== 00:09:17.580 Total : 184459.94 720.55 0.00 0.00 690.13 305.49 1966.08 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 180493 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.580 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.580 rmmod nvme_tcp 00:09:17.580 rmmod nvme_fabrics 00:09:17.580 rmmod nvme_keyring 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 180307 ']' 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 180307 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 180307 ']' 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 180307 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.580 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 180307 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 180307' 00:09:17.842 killing process with pid 180307 00:09:17.842 15:27:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 180307 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 180307 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.842 15:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.396 00:09:20.396 real 0m13.482s 00:09:20.396 user 0m20.545s 00:09:20.396 sys 0m7.681s 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.396 ************************************ 00:09:20.396 END TEST nvmf_bdev_io_wait 00:09:20.396 ************************************ 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.396 ************************************ 00:09:20.396 START TEST nvmf_queue_depth 00:09:20.396 ************************************ 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:20.396 * Looking for test storage... 
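nvmf_queue_depth now repeats the same bring-up against a fresh target. The teardown that closed the previous test, condensed from the trace just above into one sketch (remove_spdk_ns is the common.sh helper; the direct ip netns delete below is our stand-in for it):

    modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill 180307 && wait 180307     # nvmfpid of the target started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                        # stand-in for remove_spdk_ns
    ip -4 addr flush cvl_0_1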
00:09:20.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.396 --rc genhtml_branch_coverage=1 00:09:20.396 --rc genhtml_function_coverage=1 00:09:20.396 --rc genhtml_legend=1 00:09:20.396 --rc geninfo_all_blocks=1 00:09:20.396 --rc geninfo_unexecuted_blocks=1 00:09:20.396 00:09:20.396 ' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.396 --rc genhtml_branch_coverage=1 00:09:20.396 --rc genhtml_function_coverage=1 00:09:20.396 --rc genhtml_legend=1 00:09:20.396 --rc geninfo_all_blocks=1 00:09:20.396 --rc geninfo_unexecuted_blocks=1 00:09:20.396 00:09:20.396 ' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.396 --rc genhtml_branch_coverage=1 00:09:20.396 --rc genhtml_function_coverage=1 00:09:20.396 --rc genhtml_legend=1 00:09:20.396 --rc geninfo_all_blocks=1 00:09:20.396 --rc geninfo_unexecuted_blocks=1 00:09:20.396 00:09:20.396 ' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.396 --rc genhtml_branch_coverage=1 00:09:20.396 --rc genhtml_function_coverage=1 00:09:20.396 --rc genhtml_legend=1 00:09:20.396 --rc geninfo_all_blocks=1 00:09:20.396 --rc geninfo_unexecuted_blocks=1 00:09:20.396 00:09:20.396 ' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.396 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.397 15:28:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.546 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:28.547 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:28.547 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:28.547 Found net devices under 0000:31:00.0: cvl_0_0 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:28.547 Found net devices under 0000:31:00.1: cvl_0_1 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
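The two 'Found net devices' records above are gather_supported_nvmf_pci_devs walking sysfs: every PCI function whose vendor:device pair matches a supported NIC (here the Intel E810, 0x8086:0x159b) is probed for bound net interfaces. A minimal stand-alone sketch of that walk, assuming the usual sysfs layout (illustrative, not the literal nvmf/common.sh loop):

for pci in /sys/bus/pci/devices/*; do
    # vendor/device IDs as seen in the trace: Intel E810 is 0x8086:0x159b
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue    # skip functions with no netdev bound
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done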
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.547 15:28:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:09:28.547 00:09:28.547 --- 10.0.0.2 ping statistics --- 00:09:28.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.547 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:09:28.547 00:09:28.547 --- 10.0.0.1 ping statistics --- 00:09:28.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.547 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=185269 00:09:28.547 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 185269 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 185269 ']' 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.548 15:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.548 [2024-09-27 15:28:08.360767] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
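Condensed, the nvmftestinit/nvmf_tcp_init/nvmfappstart sequence traced above builds a two-port loopback rig and launches the target inside a network namespace. A stand-alone approximation, with interface names, addresses and flags taken from the log (paths are relative to an SPDK checkout; the readiness loop is a sketch of what waitforlisten does, not its source):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF         # tagged so teardown can strip just this rule
ping -c 1 10.0.0.2                         # initiator -> target, as in the trace
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do   # crude waitforlisten on /var/tmp/spdk.sock
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done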
00:09:28.548 [2024-09-27 15:28:08.360835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.548 [2024-09-27 15:28:08.453146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.548 [2024-09-27 15:28:08.499616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.548 [2024-09-27 15:28:08.499668] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.548 [2024-09-27 15:28:08.499679] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.548 [2024-09-27 15:28:08.499689] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.548 [2024-09-27 15:28:08.499698] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.548 [2024-09-27 15:28:08.499731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 [2024-09-27 15:28:09.226016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 Malloc0 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 15:28:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.810 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.072 [2024-09-27 15:28:09.298611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=185607 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 185607 /var/tmp/bdevperf.sock 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 185607 ']' 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.072 15:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.072 [2024-09-27 15:28:09.354490] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
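Stripped of the rpc_cmd/xtrace plumbing, target/queue_depth.sh@23-27 above provision the target with five RPCs over the default /var/tmp/spdk.sock (arguments exactly as traced; -u 8192 sets an 8 KiB I/O unit size):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420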
00:09:29.072 [2024-09-27 15:28:09.354555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185607 ] 00:09:29.072 [2024-09-27 15:28:09.437514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.072 [2024-09-27 15:28:09.483610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.017 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.017 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:30.018 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:30.018 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.018 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.018 NVMe0n1 00:09:30.018 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.018 15:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.018 Running I/O for 10 seconds... 00:09:40.227 9529.00 IOPS, 37.22 MiB/s 10714.00 IOPS, 41.85 MiB/s 10973.00 IOPS, 42.86 MiB/s 11474.50 IOPS, 44.82 MiB/s 11876.20 IOPS, 46.39 MiB/s 12123.00 IOPS, 47.36 MiB/s 12401.71 IOPS, 48.44 MiB/s 12541.50 IOPS, 48.99 MiB/s 12648.11 IOPS, 49.41 MiB/s 12793.60 IOPS, 49.98 MiB/s 00:09:40.227 Latency(us) 00:09:40.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.227 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:40.227 Verification LBA range: start 0x0 length 0x4000 00:09:40.227 NVMe0n1 : 10.07 12810.43 50.04 0.00 0.00 79677.47 25449.81 69031.25 00:09:40.227 =================================================================================================================== 00:09:40.227 Total : 12810.43 50.04 0.00 0.00 79677.47 25449.81 69031.25 00:09:40.227 { 00:09:40.227 "results": [ 00:09:40.227 { 00:09:40.227 "job": "NVMe0n1", 00:09:40.227 "core_mask": "0x1", 00:09:40.227 "workload": "verify", 00:09:40.227 "status": "finished", 00:09:40.227 "verify_range": { 00:09:40.227 "start": 0, 00:09:40.227 "length": 16384 00:09:40.227 }, 00:09:40.227 "queue_depth": 1024, 00:09:40.227 "io_size": 4096, 00:09:40.227 "runtime": 10.066794, 00:09:40.227 "iops": 12810.433987225724, 00:09:40.227 "mibps": 50.040757762600485, 00:09:40.227 "io_failed": 0, 00:09:40.227 "io_timeout": 0, 00:09:40.227 "avg_latency_us": 79677.47472622003, 00:09:40.227 "min_latency_us": 25449.81333333333, 00:09:40.227 "max_latency_us": 69031.25333333333 00:09:40.227 } 00:09:40.227 ], 00:09:40.227 "core_count": 1 00:09:40.227 } 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 185607 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 185607 ']' 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 185607 00:09:40.227 
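On the initiator side, the trace above amounts to three steps: start bdevperf in wait mode (-z) on its own RPC socket, attach the remote controller over that socket, then trigger the timed run. The 10 s verify workload at queue depth 1024 ramps from ~9.5 k to settle at ~12.8 k IOPS (about 50 MiB/s at 4 KiB). Stand-alone, with paths relative to an SPDK checkout and arguments as traced (the socket poll stands in for waitforlisten):

sock=/var/tmp/bdevperf.sock
./build/examples/bdevperf -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
until [ -S "$sock" ]; do sleep 0.2; done     # wait for the bdevperf RPC socket
./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests    # prints the IOPS table above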
15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 185607 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 185607' 00:09:40.227 killing process with pid 185607 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 185607 00:09:40.227 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.227 00:09:40.227 Latency(us) 00:09:40.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.227 =================================================================================================================== 00:09:40.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 185607 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.227 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.227 rmmod nvme_tcp 00:09:40.227 rmmod nvme_fabrics 00:09:40.489 rmmod nvme_keyring 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 185269 ']' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 185269 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 185269 ']' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 185269 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 185269 00:09:40.489 15:28:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 185269' 00:09:40.489 killing process with pid 185269 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 185269 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 185269 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.489 15:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.039 00:09:43.039 real 0m22.615s 00:09:43.039 user 0m25.790s 00:09:43.039 sys 0m7.067s 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 ************************************ 00:09:43.039 END TEST nvmf_queue_depth 00:09:43.039 ************************************ 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 ************************************ 00:09:43.039 START TEST nvmf_target_multipath 00:09:43.039 ************************************ 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.039 * Looking for test storage... 
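The queue_depth test above closed out in roughly 22 s of wall-clock time; between it and multipath, nvmftestfini/nvmf_tcp_fini dismantle the rig. Condensed into single calls (the real helper retries the module unloads up to 20 times, and _remove_spdk_ns runs with xtrace disabled, so its body here is a sketch):

sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # matches the rmmod lines above
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess on the target pid
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged SPDK rule
ip netns delete cvl_0_0_ns_spdk                       # presumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1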
00:09:43.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.039 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.040 --rc genhtml_branch_coverage=1 00:09:43.040 --rc genhtml_function_coverage=1 00:09:43.040 --rc genhtml_legend=1 00:09:43.040 --rc geninfo_all_blocks=1 00:09:43.040 --rc geninfo_unexecuted_blocks=1 00:09:43.040 00:09:43.040 ' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.040 --rc genhtml_branch_coverage=1 00:09:43.040 --rc genhtml_function_coverage=1 00:09:43.040 --rc genhtml_legend=1 00:09:43.040 --rc geninfo_all_blocks=1 00:09:43.040 --rc geninfo_unexecuted_blocks=1 00:09:43.040 00:09:43.040 ' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.040 --rc genhtml_branch_coverage=1 00:09:43.040 --rc genhtml_function_coverage=1 00:09:43.040 --rc genhtml_legend=1 00:09:43.040 --rc geninfo_all_blocks=1 00:09:43.040 --rc geninfo_unexecuted_blocks=1 00:09:43.040 00:09:43.040 ' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.040 --rc genhtml_branch_coverage=1 00:09:43.040 --rc genhtml_function_coverage=1 00:09:43.040 --rc genhtml_legend=1 00:09:43.040 --rc geninfo_all_blocks=1 00:09:43.040 --rc geninfo_unexecuted_blocks=1 00:09:43.040 00:09:43.040 ' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
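The ver1/ver2 xtrace noise above is scripts/common.sh deciding whether the installed lcov predates version 2, so that matching LCOV_OPTS can be exported for the coverage run. The field-wise comparison it performs, condensed into a stand-alone helper (a sketch of the same logic, not the literal cmp_versions source):

lt() {    # lt A B: succeed when version A sorts strictly before version B
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov older than 2: use the old-style coverage flags'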
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:43.040 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.041 15:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:51.201 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:51.201 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:51.201 Found net devices under 0000:31:00.0: cvl_0_0 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:51.201 15:28:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:51.201 Found net devices under 0000:31:00.1: cvl_0_1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.201 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:09:51.202 00:09:51.202 --- 10.0.0.2 ping statistics --- 00:09:51.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.202 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:09:51.202 00:09:51.202 --- 10.0.0.1 ping statistics --- 00:09:51.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.202 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:51.202 15:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:51.202 only one NIC for nvmf test 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
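This is as far as the multipath test gets on this rig: multipath.sh@45-48 above test an empty string (presumably NVMF_SECOND_TARGET_IP, which nvmf_tcp_init left blank because only one E810 port pair was found), print 'only one NIC for nvmf test', tear down, and exit 0 without configuring any subsystems. The guard, condensed:

if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # variable name assumed from the empty '[ -z ]' test
    echo 'only one NIC for nvmf test'
    nvmftestfini                            # same cleanup sequence as after queue_depth
    exit 0
fi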
00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.202 rmmod nvme_tcp 00:09:51.202 rmmod nvme_fabrics 00:09:51.202 rmmod nvme_keyring 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.202 15:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.123 00:09:53.123 real 0m10.100s 00:09:53.123 user 0m2.149s 00:09:53.123 sys 0m5.887s 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:53.123 ************************************ 00:09:53.123 END TEST nvmf_target_multipath 00:09:53.123 ************************************ 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.123 ************************************ 00:09:53.123 START TEST nvmf_zcopy 00:09:53.123 ************************************ 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:53.123 * Looking for test storage... 
00:09:53.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.123 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:53.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.123 --rc genhtml_branch_coverage=1 00:09:53.124 --rc genhtml_function_coverage=1 00:09:53.124 --rc genhtml_legend=1 00:09:53.124 --rc geninfo_all_blocks=1 00:09:53.124 --rc geninfo_unexecuted_blocks=1 00:09:53.124 00:09:53.124 ' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:53.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.124 --rc genhtml_branch_coverage=1 00:09:53.124 --rc genhtml_function_coverage=1 00:09:53.124 --rc genhtml_legend=1 00:09:53.124 --rc geninfo_all_blocks=1 00:09:53.124 --rc geninfo_unexecuted_blocks=1 00:09:53.124 00:09:53.124 ' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:53.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.124 --rc genhtml_branch_coverage=1 00:09:53.124 --rc genhtml_function_coverage=1 00:09:53.124 --rc genhtml_legend=1 00:09:53.124 --rc geninfo_all_blocks=1 00:09:53.124 --rc geninfo_unexecuted_blocks=1 00:09:53.124 00:09:53.124 ' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:53.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.124 --rc genhtml_branch_coverage=1 00:09:53.124 --rc genhtml_function_coverage=1 00:09:53.124 --rc genhtml_legend=1 00:09:53.124 --rc geninfo_all_blocks=1 00:09:53.124 --rc geninfo_unexecuted_blocks=1 00:09:53.124 00:09:53.124 ' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.124 15:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:01.277 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:01.277 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:01.277 
15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:01.277 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:01.278 Found net devices under 0000:31:00.0: cvl_0_0 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:01.278 Found net devices under 0000:31:00.1: cvl_0_1 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.278 15:28:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.278 15:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:10:01.278 00:10:01.278 --- 10.0.0.2 ping statistics --- 00:10:01.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.278 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:10:01.278 00:10:01.278 --- 10.0.0.1 ping statistics --- 00:10:01.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.278 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=196441 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 196441 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 196441 ']' 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.278 15:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 [2024-09-27 15:28:41.276727] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
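The target application is then launched inside that namespace, as the nvmfappstart trace above shows (startup notices continue below). A minimal sketch of the same step, with paths as in this workspace; the retry loop here is an assumption standing in for the harness's waitforlisten helper:

# start nvmf_tgt pinned to core 1 (-m 0x2), shm id 0 (-i 0), all tracepoint groups on (-e 0xFFFF)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# block until the app answers on its UNIX-domain RPC socket
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done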
00:10:01.278 [2024-09-27 15:28:41.276791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.278 [2024-09-27 15:28:41.363782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.278 [2024-09-27 15:28:41.411082] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.278 [2024-09-27 15:28:41.411135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.278 [2024-09-27 15:28:41.411147] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.278 [2024-09-27 15:28:41.411156] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.278 [2024-09-27 15:28:41.411172] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.278 [2024-09-27 15:28:41.411200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 [2024-09-27 15:28:42.141934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 [2024-09-27 15:28:42.166206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 malloc0 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:01.853 { 00:10:01.853 "params": { 00:10:01.853 "name": "Nvme$subsystem", 00:10:01.853 "trtype": "$TEST_TRANSPORT", 00:10:01.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.853 "adrfam": "ipv4", 00:10:01.853 "trsvcid": "$NVMF_PORT", 00:10:01.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.853 "hdgst": ${hdgst:-false}, 00:10:01.853 "ddgst": ${ddgst:-false} 00:10:01.853 }, 00:10:01.853 "method": "bdev_nvme_attach_controller" 00:10:01.853 } 00:10:01.853 EOF 00:10:01.853 )") 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:01.853 15:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:01.853 "params": { 00:10:01.853 "name": "Nvme1", 00:10:01.853 "trtype": "tcp", 00:10:01.853 "traddr": "10.0.0.2", 00:10:01.853 "adrfam": "ipv4", 00:10:01.853 "trsvcid": "4420", 00:10:01.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.853 "hdgst": false, 00:10:01.853 "ddgst": false 00:10:01.853 }, 00:10:01.853 "method": "bdev_nvme_attach_controller" 00:10:01.853 }' 00:10:01.853 [2024-09-27 15:28:42.279955] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:10:01.853 [2024-09-27 15:28:42.280033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196779 ] 00:10:02.115 [2024-09-27 15:28:42.369151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.115 [2024-09-27 15:28:42.415783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.377 Running I/O for 10 seconds... 00:10:12.254 6445.00 IOPS, 50.35 MiB/s 7649.00 IOPS, 59.76 MiB/s 8356.00 IOPS, 65.28 MiB/s 8714.75 IOPS, 68.08 MiB/s 8931.40 IOPS, 69.78 MiB/s 9062.67 IOPS, 70.80 MiB/s 9157.00 IOPS, 71.54 MiB/s 9228.75 IOPS, 72.10 MiB/s 9284.56 IOPS, 72.54 MiB/s 9332.80 IOPS, 72.91 MiB/s 00:10:12.254 Latency(us) 00:10:12.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.254 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:12.254 Verification LBA range: start 0x0 length 0x1000 00:10:12.254 Nvme1n1 : 10.01 9335.37 72.93 0.00 0.00 13666.88 1419.95 28617.39 00:10:12.254 =================================================================================================================== 00:10:12.254 Total : 9335.37 72.93 0.00 0.00 13666.88 1419.95 28617.39 00:10:12.515 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=198798 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:12.516 { 00:10:12.516 "params": { 00:10:12.516 "name": "Nvme$subsystem", 00:10:12.516 "trtype": "$TEST_TRANSPORT", 00:10:12.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.516 "adrfam": "ipv4", 00:10:12.516 "trsvcid": "$NVMF_PORT", 00:10:12.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.516 "hdgst": 
${hdgst:-false}, 00:10:12.516 "ddgst": ${ddgst:-false} 00:10:12.516 }, 00:10:12.516 "method": "bdev_nvme_attach_controller" 00:10:12.516 } 00:10:12.516 EOF 00:10:12.516 )") 00:10:12.516 [2024-09-27 15:28:52.795062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.795090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:12.516 15:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:12.516 "params": { 00:10:12.516 "name": "Nvme1", 00:10:12.516 "trtype": "tcp", 00:10:12.516 "traddr": "10.0.0.2", 00:10:12.516 "adrfam": "ipv4", 00:10:12.516 "trsvcid": "4420", 00:10:12.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.516 "hdgst": false, 00:10:12.516 "ddgst": false 00:10:12.516 }, 00:10:12.516 "method": "bdev_nvme_attach_controller" 00:10:12.516 }' 00:10:12.516 [2024-09-27 15:28:52.807071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.807088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.819092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.819101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.831122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.831131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.839978] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
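Taken together, the RPCs traced above (rpc_cmd is the harness wrapper around scripts/rpc.py, run here from the spdk checkout) plus the JSON that gen_nvmf_target_json printed amount to this hedged replay; the flags are copied from the trace, while the subsystems/bdev wrapper around the printed entry and the /tmp file name (in place of the /dev/fd substitution) are assumptions:

# target side: TCP transport with zero-copy enabled, one subsystem with
# one listener, and a 32 MB / 4096-byte-block malloc bdev as namespace 1
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# initiator side: hand bdevperf the attach-controller entry printed above
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    } ]
  } ]
}
JSON
# 10 s verify workload, queue depth 128, 8192-byte I/O, as in the first run above
./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192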
00:10:12.516 [2024-09-27 15:28:52.840025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198798 ] 00:10:12.516 [2024-09-27 15:28:52.843160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.843174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.855186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.855195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.867216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.867224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.879248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.879256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.891280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.891287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.903309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.903317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.914236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.516 [2024-09-27 15:28:52.915341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.915349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.927372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.927384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.939405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.939424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.942089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.516 [2024-09-27 15:28:52.951431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.951441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.963468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.963483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.975496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.975506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.987527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:12.516 [2024-09-27 15:28:52.987536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.516 [2024-09-27 15:28:52.999556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.516 [2024-09-27 15:28:52.999565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.777 [2024-09-27 15:28:53.011598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.777 [2024-09-27 15:28:53.011612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.777 [2024-09-27 15:28:53.023626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.023637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.035659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.035670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.047689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.047699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.059720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.059729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.071750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.071758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.083782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.083791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.095812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.095822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.107843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.107851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.119876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.119884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.131912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.131923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.143942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.143950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.155971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.155983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 
15:28:53.168003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.168011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.180037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.180046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.192075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.192090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 Running I/O for 5 seconds... 00:10:12.778 [2024-09-27 15:28:53.207889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.207910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.220734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.220750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.234286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.234303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.247929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.247947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.778 [2024-09-27 15:28:53.261678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.778 [2024-09-27 15:28:53.261696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.274651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.274669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.287500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.287516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.301265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.301281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.313947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.313963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.327403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.327419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.340388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.039 [2024-09-27 15:28:53.340404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.039 [2024-09-27 15:28:53.352819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
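The long run of 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs above appears to come from the test repeatedly exercising the namespace-add error path while bdevperf I/O is in flight: every attempt to claim an NSID that is still attached fails the same way. On a target configured as sketched earlier, a second add reproduces one such pair (a hedged illustration, not the test's literal loop):

./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use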
[... error pair repeats every ~13 ms: 15:28:53.352836 through 15:28:54.189115 (elapsed 00:10:13.039-00:10:13.826) ...]
00:10:13.826 19079.00 IOPS, 149.05 MiB/s
[... error pair repeats every ~13 ms: 15:28:54.202127 through 15:28:54.295507 ...]
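The interleaved throughput sample is internally consistent: bandwidth divided by IOPS gives the I/O size. A quick check, assuming MiB here means 2^20 bytes:

awk 'BEGIN { printf "%.0f bytes per I/O\n", 149.05 * 1048576 / 19079 }'   # ~8192, i.e. 8 KiB I/Os

The two later samples in this run (19187.00 IOPS at 149.90 MiB/s and 19219.67 IOPS at 150.15 MiB/s) work out to the same ~8 KiB, so the workload's I/O size stays constant while the namespace-add retries keep failing in the background.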
[... error pair repeats every ~13 ms: 15:28:54.308690 through 15:28:55.193188 (elapsed 00:10:13.826-00:10:14.874) ...]
00:10:14.874 19187.00 IOPS, 149.90 MiB/s
[... error pair repeats every ~13 ms: 15:28:55.205696 through 15:28:55.257372 ...]
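If the goal is to clear the collision rather than just observe it, two options with the same client, again with hypothetical names: inspect which NSIDs the subsystem already holds, or drop the explicit NSID and let the target pick one (the expectation, hedged, is that SPDK assigns the lowest free NSID when none is requested):

# List current subsystems, including each attached namespace and its nsid:
scripts/rpc.py nvmf_get_subsystems
# Omit -n so the target auto-assigns a free NSID instead of insisting on 1:
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1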
[... error pair repeats every ~13 ms: 15:28:55.257387 through 15:28:56.198561 (elapsed 00:10:14.874-00:10:15.921) ...]
00:10:15.921 19219.67 IOPS, 150.15 MiB/s
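Every failed attempt emits exactly one subsystem.c rejection followed by one nvmf_rpc.c report, so the two message counts should match 1:1 across the whole run. A quick way to confirm that when triaging a saved copy of this console output (file name hypothetical):

grep -c 'Requested NSID 1 already in use' console.log
grep -c 'Unable to add namespace' console.log   # expect the same number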
[... error pair repeats every ~13 ms: 15:28:56.210810 through 15:28:56.957813 (elapsed 00:10:15.921-00:10:16.707) ...]
00:10:16.707 [2024-09-27 15:28:56.970678]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:56.970693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:56.983728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:56.983743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:56.996906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:56.996922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.010417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.010432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.023679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.023694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.036773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.036788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.050497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.050513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.063751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.063766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.076479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.076495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.089286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.089301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.103109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.103125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.116414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.116429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.129328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.129345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.142277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.142293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.155352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.155368] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.168531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.168547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.181557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.181573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.707 [2024-09-27 15:28:57.194150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.707 [2024-09-27 15:28:57.194166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 19233.25 IOPS, 150.26 MiB/s [2024-09-27 15:28:57.207102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.207118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.220634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.220650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.234073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.234090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.246663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.246679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.259910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.259925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.273179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.273195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.969 [2024-09-27 15:28:57.286714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.969 [2024-09-27 15:28:57.286730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.299916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.299932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.313549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.313565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.326440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.326455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.339772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.339787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.353326] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.353342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.366782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.366798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.380495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.380510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.393673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.393689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.407179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.407194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.420244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.420259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.433613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.433629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.970 [2024-09-27 15:28:57.447203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.970 [2024-09-27 15:28:57.447219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.460613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.460630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.473872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.473887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.487201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.487216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.499921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.499936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.512278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.512294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.525475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.231 [2024-09-27 15:28:57.525490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.231 [2024-09-27 15:28:57.538869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.538884] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.552276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.552291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.564454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.564470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.577784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.577799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.591409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.591425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.603880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.603906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.616911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.616929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.630085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.630100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.642885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.642905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.656113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.656129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.669390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.669407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.682637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.682653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.695482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.695498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.232 [2024-09-27 15:28:57.707884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.232 [2024-09-27 15:28:57.707907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.721029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.721045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.733978] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.733993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.747452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.747467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.760152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.760167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.773223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.773239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.786609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.786624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.799923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.799940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.812375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.494 [2024-09-27 15:28:57.812391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.494 [2024-09-27 15:28:57.825701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.825717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.839263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.839279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.851626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.851646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.864582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.864597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.877184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.877200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.889453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.889468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.902803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.902820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.915998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.916013] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.929889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.929909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.943391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.943406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.956704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.956720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.495 [2024-09-27 15:28:57.970344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.495 [2024-09-27 15:28:57.970360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:57.983750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:57.983765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:57.996485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:57.996500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.009315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.009331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.022797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.022813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.035782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.035797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.048465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.048480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.061454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.061469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.074683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.074698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.087930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.087945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.100801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-09-27 15:28:58.100821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-09-27 15:28:58.114386] 
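(Editor's note: the repeated pair above is the error path this run exercises on purpose — re-adding namespace ID 1 while it is already attached and I/O is in flight. A minimal sketch of a loop that would produce it, reusing the rpc_cmd helper and subsystem NQN that appear later in this log; the bdev name malloc0 is an assumption, and this is illustrative, not the zcopy.sh source:

    # each iteration trips spdk_nvmf_subsystem_add_ns_ext, which rejects the in-use NSID,
    # after which nvmf_rpc_ns_paused reports "Unable to add namespace"
    while true; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done &
    rpc_loop_pid=$!    # the "kill: (198798) - No such process" below is the harness reaping such a loop
    # ... timed abort workload runs here ...
    kill $rpc_loop_pid 2> /dev/null || true
)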
00:10:17.757 19235.80 IOPS, 150.28 MiB/s
00:10:17.757 Latency(us)
00:10:17.757 Device Information : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:10:17.757 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:17.757 Nvme1n1            :       5.01  19236.42   150.28     0.00   0.00   6648.43   2962.77  17257.81
00:10:17.757 ===================================================================================================================
00:10:17.757 Total              :             19236.42   150.28     0.00   0.00   6648.43   2962.77  17257.81
[... after the run ends, the same add-ns error pair fires roughly ten more times at ~12 ms intervals, through about 15:28:58.325, before the background RPC loop is reaped; elided ...]
00:10:18.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (198798) - No such process
00:10:18.020 15:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 198798
00:10:18.020 15:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... per-RPC xtrace boilerplate (common/autotest_common.sh@561 xtrace_disable / @10 set +x / @589 [[ 0 == 0 ]]) elided here and around each rpc_cmd below ...]
00:10:18.020 15:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:18.020 delay0
00:10:18.020 15:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:18.020 15:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:18.282 [2024-09-27 15:28:58.524179] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:24.871 [2024-09-27 15:29:04.644226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115b700 is same with the state(6) to be set
[... the same tcp.c:1773 message is logged twice more, at 15:29:04.644262 and 15:29:04.644270; elided ...]
00:10:24.871 Initializing NVMe Controllers
00:10:24.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:24.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:24.871 Initialization complete. Launching workers.
00:10:24.871 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 180
00:10:24.871 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 470, failed to submit 30
00:10:24.871 success 314, unsuccessful 156, failed 0
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:24.871 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:24.872 rmmod nvme_tcp
00:10:24.872 rmmod nvme_fabrics
00:10:24.872 rmmod nvme_keyring
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 196441 ']'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 196441
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 196441 ']'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 196441
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 196441
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 196441'
00:10:24.872 killing process with pid 196441
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 196441
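(Editor's note: the abort counters above reconcile: 470 submitted + 30 failed-to-submit = 500 abort attempts; of the 470 submitted, 314 success + 156 unsuccessful + 0 failed = 470; and the namespace line's 320 completed + 180 failed I/Os likewise sum to 500.)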
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 196441
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:24.872 15:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:26.789 15:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:26.789
00:10:26.789 real    0m33.695s
00:10:26.789 user    0m45.100s
00:10:26.789 sys     0m10.306s
00:10:26.789 15:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:26.789 15:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:26.789 ************************************
00:10:26.789 END TEST nvmf_zcopy
00:10:26.789 ************************************
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:26.790 ************************************
00:10:26.790 START TEST nvmf_nmic
00:10:26.790 ************************************
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:26.790 * Looking for test storage...
00:10:26.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... the field-by-field cmp_versions trace (scripts/common.sh@333-@368) is elided: ver1=(1 15) and ver2=(2) are split on IFS=.-:, the major fields compare as 1 < 2, and the function returns 0, so lcov 1.15 is treated as older than 2 ...]
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:26.790 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
 --rc genhtml_branch_coverage=1
 --rc genhtml_function_coverage=1
 --rc genhtml_legend=1
 --rc geninfo_all_blocks=1
 --rc geninfo_unexecuted_blocks=1
 '
[... the same option block is echoed three more times as the LCOV_OPTS assignment and the export/assignment of LCOV='lcov ...'; elided ...]
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
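(Editor's note: the elided cmp_versions walk splits each version string on dots and compares field by field. A minimal standalone sketch of that logic, under the assumption that plain numeric fields suffice for the lcov check; illustrative, not the scripts/common.sh source:

    ver_lt() {    # succeeds when $1 sorts strictly before $2, field by field
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 treated as older than 2'    # matches the trace: ver1[0]=1 < ver2[0]=2
)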
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@4 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, @5 exports it and @6 echoes it; the expanded PATH strings, which repeat the same three prefixes many times over, are elided ...]
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:27.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
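(Editor's note: the NVMF_*/NVME_* values exported here are the knobs the initiator side consumes later. A hedged example of the eventual connect call they shape, using the host NQN, target address and port recorded in this log — the subsystem NQN is an assumption, and this exact command is not traced here:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
)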
15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.052 15:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:35.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:35.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.202 
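Each iteration traced above resolves a whitelisted PCI function to its kernel net devices by globbing sysfs, then strips the path down to the interface name. Condensed into a standalone sketch, with the PCI address taken from this run:

    pci=0000:31:00.0
    # every entry under /sys/bus/pci/devices/<addr>/net/ is a netdev owned by that function
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"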
15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:35.202 Found net devices under 0000:31:00.0: cvl_0_0 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:35.202 Found net devices under 0000:31:00.1: cvl_0_1 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:35.202 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:10:35.203 00:10:35.203 --- 10.0.0.2 ping statistics --- 00:10:35.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.203 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:35.203 00:10:35.203 --- 10.0.0.1 ping statistics --- 00:10:35.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.203 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=205540 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 205540 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 205540 ']' 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.203 15:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.203 [2024-09-27 15:29:15.031452] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
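nvmf_tcp_init above splits the two ice ports across network namespaces so initiator and target traffic crosses a real link: cvl_0_0 (target, 10.0.0.2) moves into cvl_0_0_ns_spdk while cvl_0_1 (initiator, 10.0.0.1) stays in the root namespace, and the two pings verify reachability in both directions. The same topology, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1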
00:10:35.203 [2024-09-27 15:29:15.031517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.203 [2024-09-27 15:29:15.122198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.203 [2024-09-27 15:29:15.170929] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.203 [2024-09-27 15:29:15.170985] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.203 [2024-09-27 15:29:15.170995] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.203 [2024-09-27 15:29:15.171002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.203 [2024-09-27 15:29:15.171008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.203 [2024-09-27 15:29:15.171087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.203 [2024-09-27 15:29:15.171244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.203 [2024-09-27 15:29:15.171400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.203 [2024-09-27 15:29:15.171400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.465 [2024-09-27 15:29:15.916053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.465 Malloc0 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.465 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.726 [2024-09-27 15:29:15.981695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:35.726 test case1: single bdev can't be used in multiple subsystems 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.726 15:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.726 [2024-09-27 15:29:16.017519] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:35.726 [2024-09-27 15:29:16.017547] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:35.726 [2024-09-27 15:29:16.017556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.726 request: 00:10:35.726 { 00:10:35.726 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:35.726 "namespace": { 00:10:35.726 "bdev_name": "Malloc0", 00:10:35.726 "no_auto_visible": false 
00:10:35.726 }, 00:10:35.726 "method": "nvmf_subsystem_add_ns", 00:10:35.726 "req_id": 1 00:10:35.726 } 00:10:35.726 Got JSON-RPC error response 00:10:35.726 response: 00:10:35.726 { 00:10:35.726 "code": -32602, 00:10:35.726 "message": "Invalid parameters" 00:10:35.726 } 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:35.726 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:35.727 Adding namespace failed - expected result. 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:35.727 test case2: host connect to nvmf target in multiple paths 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.727 [2024-09-27 15:29:16.029716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.727 15:29:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.643 15:29:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:39.028 15:29:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.028 15:29:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.028 15:29:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.028 15:29:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.028 15:29:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:40.941 15:29:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
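Test case1 above is exercising bdev claiming: nvmf_subsystem_add_ns takes an exclusive_write claim on the backing bdev, so attaching Malloc0 to a second subsystem is refused and surfaces as the -32602 JSON-RPC error shown. The RPC sequence, condensed (rpc.py path shortened from the full workspace path):

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: already claimed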
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.941 [global] 00:10:40.941 thread=1 00:10:40.941 invalidate=1 00:10:40.941 rw=write 00:10:40.941 time_based=1 00:10:40.941 runtime=1 00:10:40.941 ioengine=libaio 00:10:40.941 direct=1 00:10:40.941 bs=4096 00:10:40.941 iodepth=1 00:10:40.941 norandommap=0 00:10:40.941 numjobs=1 00:10:40.941 00:10:40.941 verify_dump=1 00:10:40.941 verify_backlog=512 00:10:40.941 verify_state_save=0 00:10:40.941 do_verify=1 00:10:40.941 verify=crc32c-intel 00:10:40.941 [job0] 00:10:40.941 filename=/dev/nvme0n1 00:10:40.941 Could not set queue depth (nvme0n1) 00:10:41.548 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.548 fio-3.35 00:10:41.548 Starting 1 thread 00:10:42.934 00:10:42.934 job0: (groupid=0, jobs=1): err= 0: pid=207092: Fri Sep 27 15:29:23 2024 00:10:42.934 read: IOPS=92, BW=370KiB/s (379kB/s)(380KiB/1026msec) 00:10:42.934 slat (nsec): min=23962, max=37834, avg=24618.42, stdev=1383.64 00:10:42.934 clat (usec): min=697, max=43017, avg=7438.26, stdev=15097.88 00:10:42.934 lat (usec): min=722, max=43042, avg=7462.88, stdev=15097.82 00:10:42.935 clat percentiles (usec): 00:10:42.935 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 889], 00:10:42.935 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:10:42.935 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[42206], 95.00th=[42206], 00:10:42.935 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:42.935 | 99.99th=[43254] 00:10:42.935 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:42.935 slat (nsec): min=9487, max=62702, avg=27600.97, stdev=9911.21 00:10:42.935 clat (usec): min=315, max=1055, avg=583.98, stdev=103.61 00:10:42.935 lat (usec): min=325, max=1087, avg=611.59, stdev=108.25 00:10:42.935 clat percentiles (usec): 00:10:42.935 | 1.00th=[ 343], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 490], 00:10:42.935 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:10:42.935 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:10:42.935 | 99.00th=[ 816], 99.50th=[ 930], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:42.935 | 99.99th=[ 1057] 00:10:42.935 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.935 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.935 lat (usec) : 500=18.29%, 750=63.10%, 1000=13.67% 00:10:42.935 lat (msec) : 2=2.47%, 50=2.47% 00:10:42.935 cpu : usr=0.78%, sys=1.56%, ctx=607, majf=0, minf=1 00:10:42.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.935 issued rwts: total=95,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.935 00:10:42.935 Run status group 0 (all jobs): 00:10:42.935 READ: bw=370KiB/s (379kB/s), 370KiB/s-370KiB/s (379kB/s-379kB/s), io=380KiB (389kB), run=1026-1026msec 00:10:42.935 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:10:42.935 00:10:42.935 Disk stats (read/write): 00:10:42.935 nvme0n1: ios=142/512, merge=0/0, ticks=637/283, in_queue=920, util=93.69% 00:10:42.935 15:29:23 
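fio-wrapper above expands its -p nvmf -i 4096 -d 1 -t write -r 1 -v flags into the [job0] file shown, then runs it against the freshly connected namespace. An approximately equivalent direct invocation (flag mapping inferred from the generated job file, abridged):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --bs=4096 --iodepth=1 --rw=write --time_based=1 --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_backlog=512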
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.935 rmmod nvme_tcp 00:10:42.935 rmmod nvme_fabrics 00:10:42.935 rmmod nvme_keyring 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 205540 ']' 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 205540 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 205540 ']' 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 205540 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 205540 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 205540' 00:10:42.935 killing process with pid 205540 00:10:42.935 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 205540 00:10:42.935 15:29:23 
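waitforserial and waitforserial_disconnect above poll lsblk for the subsystem serial instead of sleeping a fixed interval, so the test proceeds as soon as the namespace actually appears or disappears. The same idiom as a standalone sketch:

    serial=SPDKISFASTANDAWESOME
    for (( i = 0; i <= 15; i++ )); do   # bounded wait for the device to show up
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # confirm nothing with that serial is still attached
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && echo "still attached"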
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 205540 00:10:43.196 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.197 15:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.111 00:10:45.111 real 0m18.488s 00:10:45.111 user 0m51.432s 00:10:45.111 sys 0m6.861s 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.111 ************************************ 00:10:45.111 END TEST nvmf_nmic 00:10:45.111 ************************************ 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.111 15:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.372 ************************************ 00:10:45.372 START TEST nvmf_fio_target 00:10:45.372 ************************************ 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.372 * Looking for test storage... 
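The ipts/iptr helper pair makes firewall cleanup order-independent: every rule the test inserts carries an '-m comment' tag beginning with SPDK_NVMF, so teardown filters the tagged rules out of a full save/restore instead of tracking rule positions. Both halves, condensed from this log:

    # insert: the rule carries its own spec in the comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # cleanup: drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore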
00:10:45.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:45.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.372 --rc genhtml_branch_coverage=1 00:10:45.372 --rc genhtml_function_coverage=1 00:10:45.372 --rc genhtml_legend=1 00:10:45.372 --rc geninfo_all_blocks=1 00:10:45.372 --rc geninfo_unexecuted_blocks=1 00:10:45.372 00:10:45.372 ' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:45.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.372 --rc genhtml_branch_coverage=1 00:10:45.372 --rc genhtml_function_coverage=1 00:10:45.372 --rc genhtml_legend=1 00:10:45.372 --rc geninfo_all_blocks=1 00:10:45.372 --rc geninfo_unexecuted_blocks=1 00:10:45.372 00:10:45.372 ' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:45.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.372 --rc genhtml_branch_coverage=1 00:10:45.372 --rc genhtml_function_coverage=1 00:10:45.372 --rc genhtml_legend=1 00:10:45.372 --rc geninfo_all_blocks=1 00:10:45.372 --rc geninfo_unexecuted_blocks=1 00:10:45.372 00:10:45.372 ' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:45.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.372 --rc genhtml_branch_coverage=1 00:10:45.372 --rc genhtml_function_coverage=1 00:10:45.372 --rc genhtml_legend=1 00:10:45.372 --rc geninfo_all_blocks=1 00:10:45.372 --rc geninfo_unexecuted_blocks=1 00:10:45.372 00:10:45.372 ' 00:10:45.372 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.373 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.634 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.635 15:29:25 
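The lcov gate traced a few entries back (scripts/common.sh lt -> cmp_versions) compares versions in pure bash by splitting components on '.', '-', and ':'. A minimal re-implementation of the same idea, not the verbatim script, and limited to numeric components:

    cmp_lt() {   # usage: cmp_lt 1.15 2  -> returns 0 when $1 < $2
        local -a v1 v2
        local IFS=.-:
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"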
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.635 15:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.779 15:29:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:53.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:53.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.779 15:29:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:53.779 Found net devices under 0000:31:00.0: cvl_0_0 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:53.779 Found net devices under 0000:31:00.1: cvl_0_1 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.779 15:29:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.779 15:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.779 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:10:53.780 00:10:53.780 --- 10.0.0.2 ping statistics --- 00:10:53.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.780 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:53.780 00:10:53.780 --- 10.0.0.1 ping statistics --- 00:10:53.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.780 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=211727 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 211727 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 211727 ']' 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.780 15:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.780 [2024-09-27 15:29:33.355518] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
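For reference, the namespace plumbing that nvmf_tcp_init traced above reduces to the sequence below. This is a sketch assembled only from the commands visible in the trace; the real logic in nvmf/common.sh discovers and iterates over however many e810 ports it finds, while here the two ports cvl_0_0 and cvl_0_1 (0000:31:00.0 and 0000:31:00.1) are apparently cabled to each other on the same host, which is why the cross-namespace pings succeed.

    # Target port moves into a private namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns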
00:10:53.780 [2024-09-27 15:29:33.355588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.780 [2024-09-27 15:29:33.443259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.780 [2024-09-27 15:29:33.483955] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.780 [2024-09-27 15:29:33.484004] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.780 [2024-09-27 15:29:33.484010] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.780 [2024-09-27 15:29:33.484015] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.780 [2024-09-27 15:29:33.484020] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.780 [2024-09-27 15:29:33.484242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.780 [2024-09-27 15:29:33.484397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.780 [2024-09-27 15:29:33.484441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.780 [2024-09-27 15:29:33.484444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.780 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:54.041 [2024-09-27 15:29:34.359712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.041 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.302 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:54.302 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.302 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:54.302 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.563 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:54.563 15:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.823 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:54.823 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:55.085 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.085 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:55.085 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.347 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:55.347 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.607 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:55.607 15:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:55.607 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.867 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:55.867 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.128 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:56.128 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.390 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.390 [2024-09-27 15:29:36.788245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.390 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:56.651 15:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:56.912 15:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.297 15:29:38 
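Condensed, the target provisioning that target/fio.sh just walked through is the RPC sequence below. Here rpc.py is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the nvme connect hostnqn/hostid arguments are dropped for brevity. The four namespaces, Malloc0, Malloc1, raid0 and concat0, are what surface on the initiator as nvme0n1 through nvme0n4 and what waitforserial then counts.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # called twice: Malloc0, Malloc1
    rpc.py bdev_malloc_create 64 512                      # twice more: Malloc2, Malloc3
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_malloc_create 64 512                      # three times: Malloc4, Malloc5, Malloc6
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420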
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:58.297 15:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:58.297 15:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.297 15:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:58.297 15:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:58.297 15:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:00.845 15:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:00.845 [global] 00:11:00.845 thread=1 00:11:00.845 invalidate=1 00:11:00.845 rw=write 00:11:00.845 time_based=1 00:11:00.845 runtime=1 00:11:00.845 ioengine=libaio 00:11:00.845 direct=1 00:11:00.845 bs=4096 00:11:00.845 iodepth=1 00:11:00.845 norandommap=0 00:11:00.845 numjobs=1 00:11:00.845 00:11:00.845 verify_dump=1 00:11:00.845 verify_backlog=512 00:11:00.845 verify_state_save=0 00:11:00.845 do_verify=1 00:11:00.845 verify=crc32c-intel 00:11:00.845 [job0] 00:11:00.845 filename=/dev/nvme0n1 00:11:00.845 [job1] 00:11:00.845 filename=/dev/nvme0n2 00:11:00.845 [job2] 00:11:00.845 filename=/dev/nvme0n3 00:11:00.845 [job3] 00:11:00.845 filename=/dev/nvme0n4 00:11:00.845 Could not set queue depth (nvme0n1) 00:11:00.845 Could not set queue depth (nvme0n2) 00:11:00.845 Could not set queue depth (nvme0n3) 00:11:00.845 Could not set queue depth (nvme0n4) 00:11:00.845 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.845 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.845 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.845 fio-3.35 00:11:00.845 Starting 4 threads 00:11:02.232 00:11:02.232 job0: (groupid=0, jobs=1): err= 0: pid=213430: Fri Sep 27 15:29:42 2024 00:11:02.232 read: IOPS=15, BW=63.3KiB/s (64.8kB/s)(64.0KiB/1011msec) 00:11:02.232 slat (nsec): min=24448, max=42127, avg=26035.19, stdev=4297.20 00:11:02.232 clat (usec): min=981, max=42821, avg=39176.97, stdev=10202.19 00:11:02.232 lat (usec): min=1023, max=42846, avg=39203.00, stdev=10197.90 00:11:02.232 clat percentiles (usec): 00:11:02.232 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[40633], 
20.00th=[41157], 00:11:02.232 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:11:02.232 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:11:02.232 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:02.232 | 99.99th=[42730] 00:11:02.232 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:02.232 slat (usec): min=9, max=114, avg=31.92, stdev= 7.94 00:11:02.232 clat (usec): min=277, max=1019, avg=709.30, stdev=136.20 00:11:02.232 lat (usec): min=288, max=1051, avg=741.21, stdev=136.91 00:11:02.232 clat percentiles (usec): 00:11:02.232 | 1.00th=[ 322], 5.00th=[ 465], 10.00th=[ 523], 20.00th=[ 594], 00:11:02.232 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 758], 00:11:02.232 | 70.00th=[ 799], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 898], 00:11:02.232 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1020], 00:11:02.232 | 99.99th=[ 1020] 00:11:02.232 bw ( KiB/s): min= 4087, max= 4087, per=36.41%, avg=4087.00, stdev= 0.00, samples=1 00:11:02.232 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:02.232 lat (usec) : 500=6.82%, 750=49.05%, 1000=41.10% 00:11:02.232 lat (msec) : 2=0.19%, 50=2.84% 00:11:02.233 cpu : usr=0.89%, sys=1.49%, ctx=529, majf=0, minf=1 00:11:02.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.233 job1: (groupid=0, jobs=1): err= 0: pid=213431: Fri Sep 27 15:29:42 2024 00:11:02.233 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:11:02.233 slat (nsec): min=7362, max=25543, avg=23631.82, stdev=4967.98 00:11:02.233 clat (usec): min=179, max=42123, avg=39800.75, stdev=8860.72 00:11:02.233 lat (usec): min=188, max=42148, avg=39824.38, stdev=8863.98 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 180], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:02.233 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:02.233 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:02.233 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:02.233 | 99.99th=[42206] 00:11:02.233 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:11:02.233 slat (nsec): min=9443, max=51882, avg=23061.26, stdev=12178.95 00:11:02.233 clat (usec): min=95, max=645, avg=231.42, stdev=131.40 00:11:02.233 lat (usec): min=106, max=697, avg=254.48, stdev=140.47 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 111], 00:11:02.233 | 30.00th=[ 116], 40.00th=[ 127], 50.00th=[ 225], 60.00th=[ 251], 00:11:02.233 | 70.00th=[ 281], 80.00th=[ 359], 90.00th=[ 412], 95.00th=[ 498], 00:11:02.233 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 644], 00:11:02.233 | 99.99th=[ 644] 00:11:02.233 bw ( KiB/s): min= 4096, max= 4096, per=36.49%, avg=4096.00, stdev= 0.00, samples=1 00:11:02.233 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:02.233 lat (usec) : 100=0.94%, 250=55.43%, 500=35.02%, 750=4.68% 00:11:02.233 lat (msec) : 50=3.93% 00:11:02.233 cpu : usr=0.40%, sys=1.39%, ctx=537, majf=0, minf=1 00:11:02.233 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.233 job2: (groupid=0, jobs=1): err= 0: pid=213432: Fri Sep 27 15:29:42 2024 00:11:02.233 read: IOPS=577, BW=2310KiB/s (2365kB/s)(2312KiB/1001msec) 00:11:02.233 slat (nsec): min=7045, max=59656, avg=24244.13, stdev=6714.49 00:11:02.233 clat (usec): min=422, max=950, avg=742.23, stdev=106.62 00:11:02.233 lat (usec): min=430, max=975, avg=766.47, stdev=108.31 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 465], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 652], 00:11:02.233 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 791], 00:11:02.233 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 889], 00:11:02.233 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947], 00:11:02.233 | 99.99th=[ 947] 00:11:02.233 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:02.233 slat (nsec): min=10149, max=53382, avg=32315.08, stdev=8481.66 00:11:02.233 clat (usec): min=220, max=810, avg=499.62, stdev=107.14 00:11:02.233 lat (usec): min=231, max=844, avg=531.94, stdev=110.20 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 251], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 396], 00:11:02.233 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 529], 00:11:02.233 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 660], 00:11:02.233 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 807], 99.95th=[ 807], 00:11:02.233 | 99.99th=[ 807] 00:11:02.233 bw ( KiB/s): min= 4096, max= 4096, per=36.49%, avg=4096.00, stdev= 0.00, samples=1 00:11:02.233 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:02.233 lat (usec) : 250=0.44%, 500=32.02%, 750=48.75%, 1000=18.79% 00:11:02.233 cpu : usr=1.70%, sys=5.50%, ctx=1604, majf=0, minf=1 00:11:02.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 issued rwts: total=578,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.233 job3: (groupid=0, jobs=1): err= 0: pid=213433: Fri Sep 27 15:29:42 2024 00:11:02.233 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:02.233 slat (nsec): min=9687, max=57805, avg=26965.46, stdev=2910.13 00:11:02.233 clat (usec): min=676, max=1269, avg=979.42, stdev=76.06 00:11:02.233 lat (usec): min=703, max=1313, avg=1006.38, stdev=75.88 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 922], 00:11:02.233 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:11:02.233 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:11:02.233 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1270], 99.95th=[ 1270], 00:11:02.233 | 99.99th=[ 1270] 00:11:02.233 write: IOPS=788, BW=3153KiB/s (3229kB/s)(3156KiB/1001msec); 0 zone resets 00:11:02.233 slat (nsec): min=9298, max=66901, avg=29773.70, stdev=9892.95 00:11:02.233 clat (usec): min=233, max=919, avg=572.04, stdev=112.37 00:11:02.233 lat (usec): min=244, 
max=953, avg=601.81, stdev=117.13 00:11:02.233 clat percentiles (usec): 00:11:02.233 | 1.00th=[ 302], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 469], 00:11:02.233 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:11:02.233 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 742], 00:11:02.233 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 922], 00:11:02.233 | 99.99th=[ 922] 00:11:02.233 bw ( KiB/s): min= 4087, max= 4087, per=36.41%, avg=4087.00, stdev= 0.00, samples=1 00:11:02.233 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:02.233 lat (usec) : 250=0.31%, 500=15.30%, 750=42.97%, 1000=24.52% 00:11:02.233 lat (msec) : 2=16.91% 00:11:02.233 cpu : usr=3.00%, sys=4.60%, ctx=1301, majf=0, minf=2 00:11:02.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.233 issued rwts: total=512,789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.233 00:11:02.233 Run status group 0 (all jobs): 00:11:02.233 READ: bw=4463KiB/s (4570kB/s), 63.3KiB/s-2310KiB/s (64.8kB/s-2365kB/s), io=4512KiB (4620kB), run=1001-1011msec 00:11:02.233 WRITE: bw=11.0MiB/s (11.5MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=11.1MiB (11.6MB), run=1001-1011msec 00:11:02.233 00:11:02.233 Disk stats (read/write): 00:11:02.233 nvme0n1: ios=61/512, merge=0/0, ticks=737/336, in_queue=1073, util=93.79% 00:11:02.233 nvme0n2: ios=40/512, merge=0/0, ticks=1636/114, in_queue=1750, util=96.52% 00:11:02.233 nvme0n3: ios=534/801, merge=0/0, ticks=1285/359, in_queue=1644, util=96.38% 00:11:02.233 nvme0n4: ios=507/512, merge=0/0, ticks=479/235, in_queue=714, util=89.34% 00:11:02.233 15:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:02.233 [global] 00:11:02.233 thread=1 00:11:02.233 invalidate=1 00:11:02.233 rw=randwrite 00:11:02.233 time_based=1 00:11:02.233 runtime=1 00:11:02.233 ioengine=libaio 00:11:02.233 direct=1 00:11:02.233 bs=4096 00:11:02.233 iodepth=1 00:11:02.233 norandommap=0 00:11:02.233 numjobs=1 00:11:02.233 00:11:02.233 verify_dump=1 00:11:02.233 verify_backlog=512 00:11:02.233 verify_state_save=0 00:11:02.233 do_verify=1 00:11:02.233 verify=crc32c-intel 00:11:02.233 [job0] 00:11:02.233 filename=/dev/nvme0n1 00:11:02.233 [job1] 00:11:02.233 filename=/dev/nvme0n2 00:11:02.233 [job2] 00:11:02.233 filename=/dev/nvme0n3 00:11:02.233 [job3] 00:11:02.233 filename=/dev/nvme0n4 00:11:02.233 Could not set queue depth (nvme0n1) 00:11:02.233 Could not set queue depth (nvme0n2) 00:11:02.233 Could not set queue depth (nvme0n3) 00:11:02.233 Could not set queue depth (nvme0n4) 00:11:02.495 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.495 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.495 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.495 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.495 fio-3.35 00:11:02.495 Starting 4 threads 00:11:03.881 00:11:03.881 job0: (groupid=0, jobs=1): err= 0: 
pid=213954: Fri Sep 27 15:29:44 2024 00:11:03.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:03.881 slat (nsec): min=7844, max=42201, avg=24760.59, stdev=2390.82 00:11:03.881 clat (usec): min=493, max=1604, avg=962.29, stdev=130.21 00:11:03.881 lat (usec): min=517, max=1628, avg=987.05, stdev=130.19 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 619], 5.00th=[ 734], 10.00th=[ 799], 20.00th=[ 865], 00:11:03.881 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1004], 00:11:03.881 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:11:03.881 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1598], 99.95th=[ 1598], 00:11:03.881 | 99.99th=[ 1598] 00:11:03.881 write: IOPS=844, BW=3377KiB/s (3458kB/s)(3380KiB/1001msec); 0 zone resets 00:11:03.881 slat (usec): min=9, max=113, avg=28.34, stdev= 8.39 00:11:03.881 clat (usec): min=128, max=1109, avg=544.41, stdev=141.53 00:11:03.881 lat (usec): min=138, max=1139, avg=572.75, stdev=143.79 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 253], 5.00th=[ 322], 10.00th=[ 363], 20.00th=[ 433], 00:11:03.881 | 30.00th=[ 461], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 578], 00:11:03.881 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 725], 95.00th=[ 783], 00:11:03.881 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 1106], 99.95th=[ 1106], 00:11:03.881 | 99.99th=[ 1106] 00:11:03.881 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:03.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:03.881 lat (usec) : 250=0.44%, 500=24.47%, 750=35.30%, 1000=24.32% 00:11:03.881 lat (msec) : 2=15.48% 00:11:03.881 cpu : usr=2.20%, sys=3.70%, ctx=1358, majf=0, minf=1 00:11:03.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 issued rwts: total=512,845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.881 job1: (groupid=0, jobs=1): err= 0: pid=213955: Fri Sep 27 15:29:44 2024 00:11:03.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:03.881 slat (nsec): min=25127, max=54366, avg=25949.10, stdev=2779.07 00:11:03.881 clat (usec): min=684, max=1337, avg=1088.46, stdev=84.14 00:11:03.881 lat (usec): min=710, max=1362, avg=1114.41, stdev=83.94 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 865], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 1037], 00:11:03.881 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:11:03.881 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:11:03.881 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:11:03.881 | 99.99th=[ 1336] 00:11:03.881 write: IOPS=673, BW=2693KiB/s (2758kB/s)(2696KiB/1001msec); 0 zone resets 00:11:03.881 slat (nsec): min=8826, max=78163, avg=28549.48, stdev=8874.30 00:11:03.881 clat (usec): min=312, max=878, avg=594.86, stdev=104.85 00:11:03.881 lat (usec): min=326, max=909, avg=623.41, stdev=107.95 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 326], 5.00th=[ 424], 10.00th=[ 449], 20.00th=[ 510], 00:11:03.881 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:03.881 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 758], 00:11:03.881 | 99.00th=[ 824], 99.50th=[ 824], 99.90th=[ 881], 99.95th=[ 
881], 00:11:03.881 | 99.99th=[ 881] 00:11:03.881 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:03.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:03.881 lat (usec) : 500=9.78%, 750=43.25%, 1000=10.12% 00:11:03.881 lat (msec) : 2=36.85% 00:11:03.881 cpu : usr=1.80%, sys=5.10%, ctx=1187, majf=0, minf=1 00:11:03.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 issued rwts: total=512,674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.881 job2: (groupid=0, jobs=1): err= 0: pid=213956: Fri Sep 27 15:29:44 2024 00:11:03.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:03.881 slat (nsec): min=7341, max=43943, avg=25021.43, stdev=2602.42 00:11:03.881 clat (usec): min=427, max=1262, avg=945.34, stdev=126.32 00:11:03.881 lat (usec): min=453, max=1287, avg=970.36, stdev=126.32 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 578], 5.00th=[ 709], 10.00th=[ 775], 20.00th=[ 848], 00:11:03.881 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 963], 60.00th=[ 996], 00:11:03.881 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:11:03.881 | 99.00th=[ 1188], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:11:03.881 | 99.99th=[ 1270] 00:11:03.881 write: IOPS=840, BW=3361KiB/s (3441kB/s)(3364KiB/1001msec); 0 zone resets 00:11:03.881 slat (nsec): min=9328, max=73237, avg=29435.34, stdev=7868.50 00:11:03.881 clat (usec): min=143, max=961, avg=555.99, stdev=135.68 00:11:03.881 lat (usec): min=153, max=974, avg=585.43, stdev=137.52 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 258], 5.00th=[ 322], 10.00th=[ 379], 20.00th=[ 429], 00:11:03.881 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 594], 00:11:03.881 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:11:03.881 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 963], 99.95th=[ 963], 00:11:03.881 | 99.99th=[ 963] 00:11:03.881 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:03.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:03.881 lat (usec) : 250=0.44%, 500=20.33%, 750=39.69%, 1000=24.76% 00:11:03.881 lat (msec) : 2=14.78% 00:11:03.881 cpu : usr=2.70%, sys=3.30%, ctx=1354, majf=0, minf=1 00:11:03.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.881 issued rwts: total=512,841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.881 job3: (groupid=0, jobs=1): err= 0: pid=213957: Fri Sep 27 15:29:44 2024 00:11:03.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:03.881 slat (nsec): min=7215, max=50594, avg=28158.12, stdev=3109.89 00:11:03.881 clat (usec): min=465, max=1176, avg=940.89, stdev=104.55 00:11:03.881 lat (usec): min=473, max=1204, avg=969.05, stdev=104.90 00:11:03.881 clat percentiles (usec): 00:11:03.881 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 791], 20.00th=[ 865], 00:11:03.881 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:11:03.881 | 
70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:11:03.881 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:03.881 | 99.99th=[ 1172] 00:11:03.882 write: IOPS=784, BW=3137KiB/s (3212kB/s)(3140KiB/1001msec); 0 zone resets 00:11:03.882 slat (usec): min=9, max=228, avg=33.86, stdev=11.05 00:11:03.882 clat (usec): min=252, max=909, avg=594.25, stdev=115.59 00:11:03.882 lat (usec): min=288, max=943, avg=628.12, stdev=118.34 00:11:03.882 clat percentiles (usec): 00:11:03.882 | 1.00th=[ 302], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 494], 00:11:03.882 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:03.882 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:11:03.882 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 914], 99.95th=[ 914], 00:11:03.882 | 99.99th=[ 914] 00:11:03.882 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:03.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:03.882 lat (usec) : 500=13.34%, 750=44.41%, 1000=30.30% 00:11:03.882 lat (msec) : 2=11.95% 00:11:03.882 cpu : usr=2.50%, sys=5.80%, ctx=1301, majf=0, minf=1 00:11:03.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.882 issued rwts: total=512,785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.882 00:11:03.882 Run status group 0 (all jobs): 00:11:03.882 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:11:03.882 WRITE: bw=12.3MiB/s (12.9MB/s), 2693KiB/s-3377KiB/s (2758kB/s-3458kB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:11:03.882 00:11:03.882 Disk stats (read/write): 00:11:03.882 nvme0n1: ios=524/512, merge=0/0, ticks=494/261, in_queue=755, util=81.06% 00:11:03.882 nvme0n2: ios=397/512, merge=0/0, ticks=393/233, in_queue=626, util=80.19% 00:11:03.882 nvme0n3: ios=481/512, merge=0/0, ticks=446/263, in_queue=709, util=86.27% 00:11:03.882 nvme0n4: ios=487/512, merge=0/0, ticks=932/225, in_queue=1157, util=99.77% 00:11:03.882 15:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:03.882 [global] 00:11:03.882 thread=1 00:11:03.882 invalidate=1 00:11:03.882 rw=write 00:11:03.882 time_based=1 00:11:03.882 runtime=1 00:11:03.882 ioengine=libaio 00:11:03.882 direct=1 00:11:03.882 bs=4096 00:11:03.882 iodepth=128 00:11:03.882 norandommap=0 00:11:03.882 numjobs=1 00:11:03.882 00:11:03.882 verify_dump=1 00:11:03.882 verify_backlog=512 00:11:03.882 verify_state_save=0 00:11:03.882 do_verify=1 00:11:03.882 verify=crc32c-intel 00:11:03.882 [job0] 00:11:03.882 filename=/dev/nvme0n1 00:11:03.882 [job1] 00:11:03.882 filename=/dev/nvme0n2 00:11:03.882 [job2] 00:11:03.882 filename=/dev/nvme0n3 00:11:03.882 [job3] 00:11:03.882 filename=/dev/nvme0n4 00:11:03.882 Could not set queue depth (nvme0n1) 00:11:03.882 Could not set queue depth (nvme0n2) 00:11:03.882 Could not set queue depth (nvme0n3) 00:11:03.882 Could not set queue depth (nvme0n4) 00:11:04.143 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.143 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.143 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.143 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.143 fio-3.35 00:11:04.143 Starting 4 threads 00:11:05.527 00:11:05.528 job0: (groupid=0, jobs=1): err= 0: pid=214479: Fri Sep 27 15:29:45 2024 00:11:05.528 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:11:05.528 slat (nsec): min=949, max=16298k, avg=118845.84, stdev=883141.96 00:11:05.528 clat (usec): min=4049, max=57505, avg=14072.85, stdev=7757.48 00:11:05.528 lat (usec): min=4055, max=57521, avg=14191.69, stdev=7825.96 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 5669], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[ 8717], 00:11:05.528 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[13304], 00:11:05.528 | 70.00th=[15401], 80.00th=[17957], 90.00th=[22414], 95.00th=[29492], 00:11:05.528 | 99.00th=[46924], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:11:05.528 | 99.99th=[57410] 00:11:05.528 write: IOPS=4560, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec); 0 zone resets 00:11:05.528 slat (nsec): min=1659, max=8796.7k, avg=107876.44, stdev=622386.13 00:11:05.528 clat (usec): min=1179, max=72114, avg=15172.97, stdev=12859.80 00:11:05.528 lat (usec): min=1188, max=72119, avg=15280.85, stdev=12923.19 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 3359], 5.00th=[ 5014], 10.00th=[ 5735], 20.00th=[ 7177], 00:11:05.528 | 30.00th=[ 7767], 40.00th=[ 8979], 50.00th=[10421], 60.00th=[13042], 00:11:05.528 | 70.00th=[14746], 80.00th=[16188], 90.00th=[35390], 95.00th=[44303], 00:11:05.528 | 99.00th=[66847], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:11:05.528 | 99.99th=[71828] 00:11:05.528 bw ( KiB/s): min=16384, max=19192, per=18.36%, avg=17788.00, stdev=1985.56, samples=2 00:11:05.528 iops : min= 4096, max= 4798, avg=4447.00, stdev=496.39, samples=2 00:11:05.528 lat (msec) : 2=0.12%, 4=0.93%, 10=42.61%, 20=39.13%, 50=14.95% 00:11:05.528 lat (msec) : 100=2.26% 00:11:05.528 cpu : usr=2.79%, sys=3.59%, ctx=426, majf=0, minf=1 00:11:05.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:05.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.528 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.528 job1: (groupid=0, jobs=1): err= 0: pid=214480: Fri Sep 27 15:29:45 2024 00:11:05.528 read: IOPS=6440, BW=25.2MiB/s (26.4MB/s)(25.2MiB/1003msec) 00:11:05.528 slat (nsec): min=904, max=20775k, avg=81592.73, stdev=671876.82 00:11:05.528 clat (usec): min=1149, max=57260, avg=9382.37, stdev=6204.18 00:11:05.528 lat (usec): min=1321, max=57264, avg=9463.96, stdev=6280.29 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 2769], 5.00th=[ 4883], 10.00th=[ 5866], 20.00th=[ 6849], 00:11:05.528 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 8094], 00:11:05.528 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[12256], 95.00th=[22676], 00:11:05.528 | 99.00th=[35914], 99.50th=[37487], 99.90th=[57410], 99.95th=[57410], 00:11:05.528 | 99.99th=[57410] 00:11:05.528 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:11:05.528 slat (nsec): min=1611, max=7885.4k, 
avg=56277.99, stdev=318146.59 00:11:05.528 clat (usec): min=791, max=85748, avg=9276.78, stdev=8804.97 00:11:05.528 lat (usec): min=805, max=85751, avg=9333.06, stdev=8820.18 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 1795], 5.00th=[ 2933], 10.00th=[ 3818], 20.00th=[ 5604], 00:11:05.528 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:11:05.528 | 70.00th=[ 7504], 80.00th=[ 9241], 90.00th=[15008], 95.00th=[27657], 00:11:05.528 | 99.00th=[48497], 99.50th=[61080], 99.90th=[81265], 99.95th=[85459], 00:11:05.528 | 99.99th=[85459] 00:11:05.528 bw ( KiB/s): min=23624, max=33720, per=29.60%, avg=28672.00, stdev=7138.95, samples=2 00:11:05.528 iops : min= 5906, max= 8430, avg=7168.00, stdev=1784.74, samples=2 00:11:05.528 lat (usec) : 1000=0.10% 00:11:05.528 lat (msec) : 2=1.06%, 4=6.54%, 10=72.15%, 20=13.81%, 50=5.72% 00:11:05.528 lat (msec) : 100=0.62% 00:11:05.528 cpu : usr=3.59%, sys=6.19%, ctx=793, majf=0, minf=1 00:11:05.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:05.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.528 issued rwts: total=6460,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.528 job2: (groupid=0, jobs=1): err= 0: pid=214481: Fri Sep 27 15:29:45 2024 00:11:05.528 read: IOPS=5232, BW=20.4MiB/s (21.4MB/s)(21.3MiB/1043msec) 00:11:05.528 slat (nsec): min=922, max=14980k, avg=87079.27, stdev=738124.47 00:11:05.528 clat (usec): min=4536, max=52772, avg=12369.27, stdev=6950.11 00:11:05.528 lat (usec): min=4540, max=52778, avg=12456.35, stdev=6985.13 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8455], 00:11:05.528 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10421], 00:11:05.528 | 70.00th=[13042], 80.00th=[15008], 90.00th=[19268], 95.00th=[20841], 00:11:05.528 | 99.00th=[49546], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:11:05.528 | 99.99th=[52691] 00:11:05.528 write: IOPS=5399, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1043msec); 0 zone resets 00:11:05.528 slat (nsec): min=1566, max=11895k, avg=77799.22, stdev=507000.56 00:11:05.528 clat (usec): min=1589, max=52722, avg=11432.46, stdev=8130.09 00:11:05.528 lat (usec): min=1597, max=52729, avg=11510.26, stdev=8171.87 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 3425], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 7308], 00:11:05.528 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:11:05.528 | 70.00th=[10028], 80.00th=[14615], 90.00th=[20841], 95.00th=[30016], 00:11:05.528 | 99.00th=[46400], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:11:05.528 | 99.99th=[52691] 00:11:05.528 bw ( KiB/s): min=20480, max=24576, per=23.26%, avg=22528.00, stdev=2896.31, samples=2 00:11:05.528 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:11:05.528 lat (msec) : 2=0.04%, 4=0.57%, 10=61.11%, 20=28.96%, 50=9.04% 00:11:05.528 lat (msec) : 100=0.28% 00:11:05.528 cpu : usr=3.17%, sys=5.66%, ctx=502, majf=0, minf=2 00:11:05.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:05.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.528 issued rwts: total=5457,5632,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:05.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.528 job3: (groupid=0, jobs=1): err= 0: pid=214482: Fri Sep 27 15:29:45 2024 00:11:05.528 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:11:05.528 slat (nsec): min=908, max=14435k, avg=70045.75, stdev=546309.76 00:11:05.528 clat (usec): min=3213, max=26567, avg=9056.12, stdev=2649.77 00:11:05.528 lat (usec): min=3221, max=26573, avg=9126.16, stdev=2688.44 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 7504], 00:11:05.528 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:11:05.528 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[12125], 95.00th=[13698], 00:11:05.528 | 99.00th=[19006], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:11:05.528 | 99.99th=[26608] 00:11:05.528 write: IOPS=7835, BW=30.6MiB/s (32.1MB/s)(30.8MiB/1006msec); 0 zone resets 00:11:05.528 slat (nsec): min=1620, max=6733.3k, avg=52280.02, stdev=332147.55 00:11:05.528 clat (usec): min=867, max=20012, avg=7366.18, stdev=2003.37 00:11:05.528 lat (usec): min=1186, max=20020, avg=7418.46, stdev=2024.09 00:11:05.528 clat percentiles (usec): 00:11:05.528 | 1.00th=[ 2802], 5.00th=[ 3916], 10.00th=[ 4686], 20.00th=[ 5932], 00:11:05.528 | 30.00th=[ 6652], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7832], 00:11:05.528 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 9503], 95.00th=[10945], 00:11:05.528 | 99.00th=[13173], 99.50th=[14484], 99.90th=[15533], 99.95th=[17695], 00:11:05.528 | 99.99th=[20055] 00:11:05.528 bw ( KiB/s): min=29288, max=32752, per=32.02%, avg=31020.00, stdev=2449.42, samples=2 00:11:05.528 iops : min= 7322, max= 8188, avg=7755.00, stdev=612.35, samples=2 00:11:05.528 lat (usec) : 1000=0.01% 00:11:05.528 lat (msec) : 2=0.15%, 4=2.92%, 10=82.31%, 20=14.19%, 50=0.42% 00:11:05.528 cpu : usr=4.98%, sys=7.76%, ctx=678, majf=0, minf=1 00:11:05.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:05.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.528 issued rwts: total=7680,7883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.528 00:11:05.528 Run status group 0 (all jobs): 00:11:05.528 READ: bw=88.7MiB/s (93.0MB/s), 16.0MiB/s-29.8MiB/s (16.7MB/s-31.3MB/s), io=92.6MiB (97.0MB), run=1003-1043msec 00:11:05.528 WRITE: bw=94.6MiB/s (99.2MB/s), 17.8MiB/s-30.6MiB/s (18.7MB/s-32.1MB/s), io=98.7MiB (103MB), run=1003-1043msec 00:11:05.528 00:11:05.528 Disk stats (read/write): 00:11:05.528 nvme0n1: ios=3095/3232, merge=0/0, ticks=46421/56699, in_queue=103120, util=96.59% 00:11:05.528 nvme0n2: ios=5938/6661, merge=0/0, ticks=36022/44802, in_queue=80824, util=97.04% 00:11:05.528 nvme0n3: ios=4474/4608, merge=0/0, ticks=42338/41243, in_queue=83581, util=88.38% 00:11:05.528 nvme0n4: ios=6320/6656, merge=0/0, ticks=53292/48073, in_queue=101365, util=91.77% 00:11:05.528 15:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:05.528 [global] 00:11:05.528 thread=1 00:11:05.528 invalidate=1 00:11:05.528 rw=randwrite 00:11:05.528 time_based=1 00:11:05.528 runtime=1 00:11:05.528 ioengine=libaio 00:11:05.528 direct=1 00:11:05.528 bs=4096 00:11:05.528 iodepth=128 00:11:05.528 norandommap=0 
00:11:05.528 numjobs=1 00:11:05.528 00:11:05.528 verify_dump=1 00:11:05.528 verify_backlog=512 00:11:05.528 verify_state_save=0 00:11:05.528 do_verify=1 00:11:05.528 verify=crc32c-intel 00:11:05.528 [job0] 00:11:05.528 filename=/dev/nvme0n1 00:11:05.528 [job1] 00:11:05.528 filename=/dev/nvme0n2 00:11:05.528 [job2] 00:11:05.528 filename=/dev/nvme0n3 00:11:05.528 [job3] 00:11:05.528 filename=/dev/nvme0n4 00:11:05.528 Could not set queue depth (nvme0n1) 00:11:05.528 Could not set queue depth (nvme0n2) 00:11:05.528 Could not set queue depth (nvme0n3) 00:11:05.528 Could not set queue depth (nvme0n4) 00:11:05.789 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.789 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.789 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.789 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.789 fio-3.35 00:11:05.789 Starting 4 threads 00:11:07.211 00:11:07.211 job0: (groupid=0, jobs=1): err= 0: pid=215009: Fri Sep 27 15:29:47 2024 00:11:07.211 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:11:07.211 slat (nsec): min=942, max=7072.7k, avg=59023.71, stdev=421903.60 00:11:07.211 clat (usec): min=2754, max=14861, avg=7724.31, stdev=1757.95 00:11:07.211 lat (usec): min=2762, max=16026, avg=7783.33, stdev=1787.29 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 3687], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6521], 00:11:07.211 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7570], 00:11:07.211 | 70.00th=[ 8160], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[11469], 00:11:07.211 | 99.00th=[13042], 99.50th=[13829], 99.90th=[14877], 99.95th=[14877], 00:11:07.211 | 99.99th=[14877] 00:11:07.211 write: IOPS=8145, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:11:07.211 slat (nsec): min=1561, max=44016k, avg=61330.86, stdev=847356.18 00:11:07.211 clat (usec): min=753, max=86453, avg=8300.56, stdev=9068.52 00:11:07.211 lat (usec): min=777, max=86460, avg=8361.89, stdev=9120.76 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 2114], 5.00th=[ 3425], 10.00th=[ 4228], 20.00th=[ 5342], 00:11:07.211 | 30.00th=[ 5932], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 6980], 00:11:07.211 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8717], 95.00th=[16909], 00:11:07.211 | 99.00th=[49546], 99.50th=[49546], 99.90th=[86508], 99.95th=[86508], 00:11:07.211 | 99.99th=[86508] 00:11:07.211 bw ( KiB/s): min=31696, max=32702, per=31.11%, avg=32199.00, stdev=711.35, samples=2 00:11:07.211 iops : min= 7924, max= 8175, avg=8049.50, stdev=177.48, samples=2 00:11:07.211 lat (usec) : 1000=0.03% 00:11:07.211 lat (msec) : 2=0.40%, 4=4.29%, 10=86.76%, 20=6.27%, 50=2.00% 00:11:07.211 lat (msec) : 100=0.25% 00:11:07.211 cpu : usr=5.18%, sys=8.27%, ctx=697, majf=0, minf=1 00:11:07.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:07.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.211 issued rwts: total=7680,8186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.211 job1: (groupid=0, jobs=1): err= 0: pid=215010: Fri Sep 27 15:29:47 2024 00:11:07.211 read: 
IOPS=8347, BW=32.6MiB/s (34.2MB/s)(32.7MiB/1004msec) 00:11:07.211 slat (nsec): min=868, max=6691.2k, avg=60474.34, stdev=389149.24 00:11:07.211 clat (usec): min=1305, max=14133, avg=7593.70, stdev=1143.89 00:11:07.211 lat (usec): min=3638, max=14140, avg=7654.17, stdev=1184.76 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6915], 00:11:07.211 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7767], 00:11:07.211 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[ 9634], 00:11:07.211 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12911], 99.95th=[12911], 00:11:07.211 | 99.99th=[14091] 00:11:07.211 write: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec); 0 zone resets 00:11:07.211 slat (nsec): min=1470, max=6915.3k, avg=52922.40, stdev=228856.10 00:11:07.211 clat (usec): min=3496, max=18857, avg=7258.42, stdev=1401.44 00:11:07.211 lat (usec): min=3859, max=18864, avg=7311.34, stdev=1414.43 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6325], 00:11:07.211 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:11:07.211 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 8848], 00:11:07.211 | 99.00th=[12649], 99.50th=[16450], 99.90th=[18482], 99.95th=[18744], 00:11:07.211 | 99.99th=[18744] 00:11:07.211 bw ( KiB/s): min=32702, max=36864, per=33.61%, avg=34783.00, stdev=2942.98, samples=2 00:11:07.211 iops : min= 8175, max= 9216, avg=8695.50, stdev=736.10, samples=2 00:11:07.211 lat (msec) : 2=0.01%, 4=0.18%, 10=96.49%, 20=3.32% 00:11:07.211 cpu : usr=3.89%, sys=6.58%, ctx=1151, majf=0, minf=2 00:11:07.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:07.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.211 issued rwts: total=8381,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.211 job2: (groupid=0, jobs=1): err= 0: pid=215011: Fri Sep 27 15:29:47 2024 00:11:07.211 read: IOPS=4935, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1009msec) 00:11:07.211 slat (nsec): min=930, max=13712k, avg=95973.15, stdev=701332.49 00:11:07.211 clat (usec): min=3253, max=29472, avg=11736.61, stdev=3609.84 00:11:07.211 lat (usec): min=4474, max=29477, avg=11832.59, stdev=3666.56 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 8979], 00:11:07.211 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11207], 00:11:07.211 | 70.00th=[13566], 80.00th=[15008], 90.00th=[15926], 95.00th=[17695], 00:11:07.211 | 99.00th=[24773], 99.50th=[27657], 99.90th=[29492], 99.95th=[29492], 00:11:07.211 | 99.99th=[29492] 00:11:07.211 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:11:07.211 slat (nsec): min=1580, max=7156.7k, avg=97179.28, stdev=508925.39 00:11:07.211 clat (usec): min=3339, max=41599, avg=13500.16, stdev=7111.80 00:11:07.211 lat (usec): min=3346, max=41603, avg=13597.34, stdev=7153.14 00:11:07.211 clat percentiles (usec): 00:11:07.211 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 8094], 00:11:07.211 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[12125], 60.00th=[13435], 00:11:07.212 | 70.00th=[15926], 80.00th=[16581], 90.00th=[22938], 95.00th=[28967], 00:11:07.212 | 99.00th=[36439], 99.50th=[40633], 99.90th=[41681], 
99.95th=[41681], 00:11:07.212 | 99.99th=[41681] 00:11:07.212 bw ( KiB/s): min=17552, max=23361, per=19.77%, avg=20456.50, stdev=4107.58, samples=2 00:11:07.212 iops : min= 4388, max= 5840, avg=5114.00, stdev=1026.72, samples=2 00:11:07.212 lat (msec) : 4=0.27%, 10=42.52%, 20=48.32%, 50=8.89% 00:11:07.212 cpu : usr=4.17%, sys=4.56%, ctx=506, majf=0, minf=2 00:11:07.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:07.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.212 issued rwts: total=4980,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.212 job3: (groupid=0, jobs=1): err= 0: pid=215012: Fri Sep 27 15:29:47 2024 00:11:07.212 read: IOPS=3797, BW=14.8MiB/s (15.6MB/s)(15.0MiB/1008msec) 00:11:07.212 slat (nsec): min=944, max=15156k, avg=136182.19, stdev=870769.47 00:11:07.212 clat (usec): min=4025, max=89136, avg=14193.18, stdev=11038.36 00:11:07.212 lat (usec): min=4034, max=89144, avg=14329.36, stdev=11133.08 00:11:07.212 clat percentiles (usec): 00:11:07.212 | 1.00th=[ 4490], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8586], 00:11:07.212 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[11600], 00:11:07.212 | 70.00th=[13173], 80.00th=[16909], 90.00th=[25822], 95.00th=[34866], 00:11:07.212 | 99.00th=[70779], 99.50th=[82314], 99.90th=[88605], 99.95th=[89654], 00:11:07.212 | 99.99th=[89654] 00:11:07.212 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:11:07.212 slat (nsec): min=1604, max=9158.5k, avg=112466.07, stdev=561085.35 00:11:07.212 clat (usec): min=1182, max=89107, avg=17951.78, stdev=16052.76 00:11:07.212 lat (usec): min=1193, max=89111, avg=18064.24, stdev=16142.10 00:11:07.212 clat percentiles (usec): 00:11:07.212 | 1.00th=[ 2933], 5.00th=[ 4817], 10.00th=[ 7046], 20.00th=[ 9241], 00:11:07.212 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[13435], 60.00th=[15795], 00:11:07.212 | 70.00th=[16057], 80.00th=[18482], 90.00th=[39060], 95.00th=[56886], 00:11:07.212 | 99.00th=[80217], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:11:07.212 | 99.99th=[88605] 00:11:07.212 bw ( KiB/s): min=14672, max=18059, per=15.81%, avg=16365.50, stdev=2394.97, samples=2 00:11:07.212 iops : min= 3668, max= 4514, avg=4091.00, stdev=598.21, samples=2 00:11:07.212 lat (msec) : 2=0.20%, 4=1.49%, 10=43.50%, 20=38.25%, 50=12.18% 00:11:07.212 lat (msec) : 100=4.38% 00:11:07.212 cpu : usr=2.98%, sys=4.07%, ctx=506, majf=0, minf=2 00:11:07.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:07.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.212 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.212 00:11:07.212 Run status group 0 (all jobs): 00:11:07.212 READ: bw=96.3MiB/s (101MB/s), 14.8MiB/s-32.6MiB/s (15.6MB/s-34.2MB/s), io=97.1MiB (102MB), run=1004-1009msec 00:11:07.212 WRITE: bw=101MiB/s (106MB/s), 15.9MiB/s-33.9MiB/s (16.6MB/s-35.5MB/s), io=102MiB (107MB), run=1004-1009msec 00:11:07.212 00:11:07.212 Disk stats (read/write): 00:11:07.212 nvme0n1: ios=6692/6923, merge=0/0, ticks=50011/42514, in_queue=92525, util=96.79% 00:11:07.212 nvme0n2: ios=6940/7168, merge=0/0, ticks=26711/24686, 
in_queue=51397, util=87.16% 00:11:07.212 nvme0n3: ios=4096/4519, merge=0/0, ticks=39135/52585, in_queue=91720, util=88.49% 00:11:07.212 nvme0n4: ios=3072/3455, merge=0/0, ticks=39782/62655, in_queue=102437, util=89.53% 00:11:07.212 15:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:07.212 15:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=215344 00:11:07.212 15:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:07.212 15:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:07.212 [global] 00:11:07.212 thread=1 00:11:07.212 invalidate=1 00:11:07.212 rw=read 00:11:07.212 time_based=1 00:11:07.212 runtime=10 00:11:07.212 ioengine=libaio 00:11:07.212 direct=1 00:11:07.212 bs=4096 00:11:07.212 iodepth=1 00:11:07.212 norandommap=1 00:11:07.212 numjobs=1 00:11:07.212 00:11:07.212 [job0] 00:11:07.212 filename=/dev/nvme0n1 00:11:07.212 [job1] 00:11:07.212 filename=/dev/nvme0n2 00:11:07.212 [job2] 00:11:07.212 filename=/dev/nvme0n3 00:11:07.212 [job3] 00:11:07.212 filename=/dev/nvme0n4 00:11:07.212 Could not set queue depth (nvme0n1) 00:11:07.212 Could not set queue depth (nvme0n2) 00:11:07.212 Could not set queue depth (nvme0n3) 00:11:07.212 Could not set queue depth (nvme0n4) 00:11:07.473 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.473 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.473 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.473 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.473 fio-3.35 00:11:07.473 Starting 4 threads 00:11:10.021 15:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:10.282 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10674176, buflen=4096 00:11:10.282 fio: pid=215533, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:10.282 15:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:10.543 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9519104, buflen=4096 00:11:10.543 fio: pid=215532, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:10.543 15:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.543 15:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:10.543 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.543 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:10.805 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14843904, buflen=4096 00:11:10.805 fio: pid=215530, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:10.805 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12881920, buflen=4096 00:11:10.805 fio: pid=215531, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:10.805 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.805 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:10.805 00:11:10.805 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=215530: Fri Sep 27 15:29:51 2024 00:11:10.805 read: IOPS=1234, BW=4936KiB/s (5054kB/s)(14.2MiB/2937msec) 00:11:10.805 slat (usec): min=6, max=32852, avg=36.11, stdev=574.80 00:11:10.805 clat (usec): min=346, max=1372, avg=762.14, stdev=133.63 00:11:10.805 lat (usec): min=371, max=33850, avg=798.26, stdev=594.75 00:11:10.805 clat percentiles (usec): 00:11:10.805 | 1.00th=[ 457], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 635], 00:11:10.805 | 30.00th=[ 685], 40.00th=[ 734], 50.00th=[ 775], 60.00th=[ 816], 00:11:10.805 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 947], 00:11:10.805 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1057], 00:11:10.805 | 99.99th=[ 1369] 00:11:10.805 bw ( KiB/s): min= 4856, max= 5128, per=33.45%, avg=5035.20, stdev=111.23, samples=5 00:11:10.805 iops : min= 1214, max= 1282, avg=1258.80, stdev=27.81, samples=5 00:11:10.805 lat (usec) : 500=1.63%, 750=42.43%, 1000=55.23% 00:11:10.805 lat (msec) : 2=0.69% 00:11:10.805 cpu : usr=1.29%, sys=3.51%, ctx=3629, majf=0, minf=1 00:11:10.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.805 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.805 issued rwts: total=3625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.805 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=215531: Fri Sep 27 15:29:51 2024 00:11:10.805 read: IOPS=1011, BW=4046KiB/s (4143kB/s)(12.3MiB/3109msec) 00:11:10.805 slat (usec): min=6, max=24943, avg=48.92, stdev=601.62 00:11:10.805 clat (usec): min=312, max=4294, avg=925.94, stdev=170.86 00:11:10.805 lat (usec): min=334, max=25887, avg=974.87, stdev=626.27 00:11:10.805 clat percentiles (usec): 00:11:10.805 | 1.00th=[ 498], 5.00th=[ 685], 10.00th=[ 750], 20.00th=[ 816], 00:11:10.806 | 30.00th=[ 865], 40.00th=[ 898], 50.00th=[ 930], 60.00th=[ 955], 00:11:10.806 | 70.00th=[ 988], 80.00th=[ 1037], 90.00th=[ 1106], 95.00th=[ 1156], 00:11:10.806 | 99.00th=[ 1270], 99.50th=[ 1336], 99.90th=[ 1418], 99.95th=[ 4113], 00:11:10.806 | 99.99th=[ 4293] 00:11:10.806 bw ( KiB/s): min= 3656, max= 4256, per=27.13%, avg=4083.00, stdev=239.53, samples=6 00:11:10.806 iops : min= 914, max= 1064, avg=1020.67, stdev=59.94, samples=6 00:11:10.806 lat (usec) : 500=1.02%, 750=8.90%, 1000=62.84% 00:11:10.806 lat (msec) : 2=27.11%, 4=0.03%, 10=0.06% 00:11:10.806 cpu : usr=1.09%, sys=3.09%, ctx=3152, majf=0, minf=2 00:11:10.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:10.806 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.806 issued rwts: total=3146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.806 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=215532: Fri Sep 27 15:29:51 2024 00:11:10.806 read: IOPS=842, BW=3367KiB/s (3448kB/s)(9296KiB/2761msec) 00:11:10.806 slat (usec): min=5, max=11912, avg=34.17, stdev=279.04 00:11:10.806 clat (usec): min=305, max=42646, avg=1137.61, stdev=1904.39 00:11:10.806 lat (usec): min=332, max=42671, avg=1171.79, stdev=1924.63 00:11:10.806 clat percentiles (usec): 00:11:10.806 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 938], 00:11:10.806 | 30.00th=[ 988], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:11:10.806 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1254], 00:11:10.806 | 99.00th=[ 1336], 99.50th=[ 1385], 99.90th=[42206], 99.95th=[42206], 00:11:10.806 | 99.99th=[42730] 00:11:10.806 bw ( KiB/s): min= 2888, max= 3808, per=22.56%, avg=3396.80, stdev=380.34, samples=5 00:11:10.806 iops : min= 722, max= 952, avg=849.20, stdev=95.09, samples=5 00:11:10.806 lat (usec) : 500=0.17%, 750=1.98%, 1000=30.58% 00:11:10.806 lat (msec) : 2=66.97%, 10=0.04%, 50=0.22% 00:11:10.806 cpu : usr=1.56%, sys=2.97%, ctx=2328, majf=0, minf=2 00:11:10.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.806 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.806 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.806 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=215533: Fri Sep 27 15:29:51 2024 00:11:10.806 read: IOPS=1012, BW=4047KiB/s (4144kB/s)(10.2MiB/2576msec) 00:11:10.806 slat (nsec): min=6562, max=56091, avg=25590.29, stdev=6165.21 00:11:10.806 clat (usec): min=191, max=42935, avg=946.55, stdev=2052.63 00:11:10.806 lat (usec): min=212, max=42960, avg=972.14, stdev=2052.68 00:11:10.806 clat percentiles (usec): 00:11:10.806 | 1.00th=[ 396], 5.00th=[ 498], 10.00th=[ 545], 20.00th=[ 627], 00:11:10.806 | 30.00th=[ 693], 40.00th=[ 758], 50.00th=[ 816], 60.00th=[ 873], 00:11:10.806 | 70.00th=[ 963], 80.00th=[ 1074], 90.00th=[ 1188], 95.00th=[ 1254], 00:11:10.806 | 99.00th=[ 1385], 99.50th=[ 1434], 99.90th=[42730], 99.95th=[42730], 00:11:10.806 | 99.99th=[42730] 00:11:10.806 bw ( KiB/s): min= 2688, max= 5136, per=27.13%, avg=4083.20, stdev=1003.78, samples=5 00:11:10.806 iops : min= 672, max= 1284, avg=1020.80, stdev=250.94, samples=5 00:11:10.806 lat (usec) : 250=0.31%, 500=5.14%, 750=32.83%, 1000=35.17% 00:11:10.806 lat (msec) : 2=26.16%, 4=0.08%, 50=0.27% 00:11:10.806 cpu : usr=1.20%, sys=3.81%, ctx=2609, majf=0, minf=2 00:11:10.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.806 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.806 issued rwts: total=2607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.806 00:11:10.806 Run status group 0 (all jobs): 00:11:10.806 READ: bw=14.7MiB/s (15.4MB/s), 3367KiB/s-4936KiB/s 
(3448kB/s-5054kB/s), io=45.7MiB (47.9MB), run=2576-3109msec 00:11:10.806 00:11:10.806 Disk stats (read/write): 00:11:10.806 nvme0n1: ios=3523/0, merge=0/0, ticks=2576/0, in_queue=2576, util=93.39% 00:11:10.806 nvme0n2: ios=3145/0, merge=0/0, ticks=2814/0, in_queue=2814, util=93.53% 00:11:10.806 nvme0n3: ios=2204/0, merge=0/0, ticks=2324/0, in_queue=2324, util=96.03% 00:11:10.806 nvme0n4: ios=2400/0, merge=0/0, ticks=2793/0, in_queue=2793, util=99.10% 00:11:11.067 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.067 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:11.328 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.328 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:11.328 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.328 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:11.587 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.587 15:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 215344 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:11.848 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:11.848 nvmf hotplug test: fio failed as expected 00:11:11.848 15:29:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.108 rmmod nvme_tcp 00:11:12.108 rmmod nvme_fabrics 00:11:12.108 rmmod nvme_keyring 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 211727 ']' 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 211727 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 211727 ']' 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 211727 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 211727 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 211727' 00:11:12.108 killing process with pid 211727 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 211727 00:11:12.108 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 211727 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:12.368 15:29:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.368 15:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.279 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.279 00:11:14.279 real 0m29.121s 00:11:14.279 user 2m38.662s 00:11:14.279 sys 0m9.743s 00:11:14.279 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.279 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.279 ************************************ 00:11:14.279 END TEST nvmf_fio_target 00:11:14.279 ************************************ 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.540 ************************************ 00:11:14.540 START TEST nvmf_bdevio 00:11:14.540 ************************************ 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.540 * Looking for test storage... 
00:11:14.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:14.540 15:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.540 --rc genhtml_branch_coverage=1 00:11:14.540 --rc genhtml_function_coverage=1 00:11:14.540 --rc genhtml_legend=1 00:11:14.540 --rc geninfo_all_blocks=1 00:11:14.540 --rc geninfo_unexecuted_blocks=1 00:11:14.540 00:11:14.540 ' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.540 --rc genhtml_branch_coverage=1 00:11:14.540 --rc genhtml_function_coverage=1 00:11:14.540 --rc genhtml_legend=1 00:11:14.540 --rc geninfo_all_blocks=1 00:11:14.540 --rc geninfo_unexecuted_blocks=1 00:11:14.540 00:11:14.540 ' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.540 --rc genhtml_branch_coverage=1 00:11:14.540 --rc genhtml_function_coverage=1 00:11:14.540 --rc genhtml_legend=1 00:11:14.540 --rc geninfo_all_blocks=1 00:11:14.540 --rc geninfo_unexecuted_blocks=1 00:11:14.540 00:11:14.540 ' 00:11:14.540 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.540 --rc genhtml_branch_coverage=1 00:11:14.540 --rc genhtml_function_coverage=1 00:11:14.540 --rc genhtml_legend=1 00:11:14.540 --rc geninfo_all_blocks=1 00:11:14.540 --rc geninfo_unexecuted_blocks=1 00:11:14.540 00:11:14.540 ' 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.801 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.802 15:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:22.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:22.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:22.953 Found net devices under 0000:31:00.0: cvl_0_0 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:22.953 Found net devices under 0000:31:00.1: cvl_0_1 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.953 15:30:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.953 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:11:22.954 00:11:22.954 --- 10.0.0.2 ping statistics --- 00:11:22.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.954 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:11:22.954 00:11:22.954 --- 10.0.0.1 ping statistics --- 00:11:22.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.954 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=220944 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 220944 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 220944 ']' 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.954 15:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.954 [2024-09-27 15:30:02.856467] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
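Before nvmf_tgt comes up, nvmf_tcp_init has wired the test topology that the two pings above just validated. A minimal sketch, assuming the e810 ports were already renamed cvl_0_0/cvl_0_1 as logged, of what that init reduces to:

# Target NIC lives in its own network namespace; the initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420 for NVMe-oF, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host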
00:11:22.954 [2024-09-27 15:30:02.856535] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.954 [2024-09-27 15:30:02.946727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.954 [2024-09-27 15:30:02.995682] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.954 [2024-09-27 15:30:02.995742] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.954 [2024-09-27 15:30:02.995753] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.954 [2024-09-27 15:30:02.995763] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.954 [2024-09-27 15:30:02.995772] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.954 [2024-09-27 15:30:02.995976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.954 [2024-09-27 15:30:02.996114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:22.954 [2024-09-27 15:30:02.996273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:22.954 [2024-09-27 15:30:02.996274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.216 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.216 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:23.216 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:23.216 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.216 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.477 [2024-09-27 15:30:03.730852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.477 Malloc0 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.477 15:30:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.477 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.478 [2024-09-27 15:30:03.795604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:23.478 { 00:11:23.478 "params": { 00:11:23.478 "name": "Nvme$subsystem", 00:11:23.478 "trtype": "$TEST_TRANSPORT", 00:11:23.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:23.478 "adrfam": "ipv4", 00:11:23.478 "trsvcid": "$NVMF_PORT", 00:11:23.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:23.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:23.478 "hdgst": ${hdgst:-false}, 00:11:23.478 "ddgst": ${ddgst:-false} 00:11:23.478 }, 00:11:23.478 "method": "bdev_nvme_attach_controller" 00:11:23.478 } 00:11:23.478 EOF 00:11:23.478 )") 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:23.478 15:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:23.478 "params": { 00:11:23.478 "name": "Nvme1", 00:11:23.478 "trtype": "tcp", 00:11:23.478 "traddr": "10.0.0.2", 00:11:23.478 "adrfam": "ipv4", 00:11:23.478 "trsvcid": "4420", 00:11:23.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.478 "hdgst": false, 00:11:23.478 "ddgst": false 00:11:23.478 }, 00:11:23.478 "method": "bdev_nvme_attach_controller" 00:11:23.478 }' 00:11:23.478 [2024-09-27 15:30:03.854688] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
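The --json fed to bdevio on /dev/fd/62 above is assembled by gen_nvmf_target_json from the printf fragment in the trace. A sketch of the equivalent standalone invocation; the outer "subsystems"/"config" wrapper is assumed here, since only the params block appears verbatim in the log:

# bdevio here stands for test/bdev/bdevio/bdevio under the spdk checkout.
bdevio --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF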
00:11:23.478 [2024-09-27 15:30:03.854761] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221096 ] 00:11:23.478 [2024-09-27 15:30:03.940213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.740 [2024-09-27 15:30:03.988796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.740 [2024-09-27 15:30:03.988956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.740 [2024-09-27 15:30:03.988956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.740 I/O targets: 00:11:23.740 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:23.740 00:11:23.740 00:11:23.740 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.740 http://cunit.sourceforge.net/ 00:11:23.740 00:11:23.740 00:11:23.740 Suite: bdevio tests on: Nvme1n1 00:11:23.740 Test: blockdev write read block ...passed 00:11:24.002 Test: blockdev write zeroes read block ...passed 00:11:24.002 Test: blockdev write zeroes read no split ...passed 00:11:24.002 Test: blockdev write zeroes read split ...passed 00:11:24.002 Test: blockdev write zeroes read split partial ...passed 00:11:24.002 Test: blockdev reset ...[2024-09-27 15:30:04.305012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:24.002 [2024-09-27 15:30:04.305111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171f7d0 (9): Bad file descriptor 00:11:24.002 [2024-09-27 15:30:04.455909] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:24.002 passed 00:11:24.002 Test: blockdev write read 8 blocks ...passed 00:11:24.002 Test: blockdev write read size > 128k ...passed 00:11:24.002 Test: blockdev write read invalid size ...passed 00:11:24.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.264 Test: blockdev write read max offset ...passed 00:11:24.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.264 Test: blockdev writev readv 8 blocks ...passed 00:11:24.264 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.264 Test: blockdev writev readv block ...passed 00:11:24.264 Test: blockdev writev readv size > 128k ...passed 00:11:24.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.264 Test: blockdev comparev and writev ...[2024-09-27 15:30:04.642170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.642226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.642244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.642253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.642820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.642834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.642850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.642858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.643418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.643431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.643445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.643454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.644027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.644040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.644054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.264 [2024-09-27 15:30:04.644062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:24.264 passed 00:11:24.264 Test: blockdev nvme passthru rw ...passed 00:11:24.264 Test: blockdev nvme passthru vendor specific ...[2024-09-27 15:30:04.728649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.264 [2024-09-27 15:30:04.728666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.729058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.264 [2024-09-27 15:30:04.729070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.729445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.264 [2024-09-27 15:30:04.729457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:24.264 [2024-09-27 15:30:04.729828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.264 [2024-09-27 15:30:04.729841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:24.264 passed 00:11:24.264 Test: blockdev nvme admin passthru ...passed 00:11:24.527 Test: blockdev copy ...passed 00:11:24.527 00:11:24.527 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.527 suites 1 1 n/a 0 0 00:11:24.527 tests 23 23 23 0 0 00:11:24.527 asserts 152 152 152 0 n/a 00:11:24.527 00:11:24.527 Elapsed time = 1.307 seconds 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.527 15:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.527 rmmod nvme_tcp 00:11:24.527 rmmod nvme_fabrics 00:11:24.527 rmmod nvme_keyring 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
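Teardown begins at this point: the subsystem is deleted over RPC and nvmfcleanup unloads the kernel initiator modules. Reconstructed from the xtrace above, the unload idiom looks roughly like the sketch below; the retry details of the real nvmf/common.sh are not fully visible in the log, so the break condition is an assumption, and the rmmod lines in the log are modprobe's -v output as nvme_tcp's dependents are cascaded out:

  nvmfcleanup() {
      sync
      set +e                                # unload may need several attempts
      for i in {1..20}; do
          # -r also removes nvme-tcp's now-unused dependents
          # (nvme_fabrics, nvme_keyring), per the rmmod lines above
          modprobe -v -r nvme-tcp && break  # assumption: retry until it sticks
      done
      modprobe -v -r nvme-fabrics
      set -e
      return 0
  }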
00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 220944 ']' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 220944 ']' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 220944' 00:11:24.788 killing process with pid 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 220944 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.788 15:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.340 00:11:27.340 real 0m12.469s 00:11:27.340 user 0m13.202s 00:11:27.340 sys 0m6.470s 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.340 ************************************ 00:11:27.340 END TEST nvmf_bdevio 00:11:27.340 ************************************ 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.340 00:11:27.340 real 5m6.658s 00:11:27.340 user 11m59.455s 00:11:27.340 sys 1m51.977s 00:11:27.340 
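That closes out nvmf_target_core after roughly 5m07s of wall time. The killprocess helper traced during the teardown guards against signalling the wrong process before killing the target; a condensed reconstruction from the xtrace (argument handling simplified, so treat it as a sketch rather than the verbatim autotest_common.sh):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1               # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if it already exited
      if [ "$(uname)" = Linux ]; then
          # never signal the sudo wrapper itself; check the comm name
          # (here it resolves to reactor_3, a target thread, so killing is safe)
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                             # reap so ports and pidfiles are really freed
  }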
15:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.340 ************************************ 00:11:27.340 END TEST nvmf_target_core 00:11:27.340 ************************************ 00:11:27.340 15:30:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.340 15:30:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.340 15:30:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.340 15:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.340 ************************************ 00:11:27.340 START TEST nvmf_target_extra 00:11:27.340 ************************************ 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.340 * Looking for test storage... 00:11:27.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.340 --rc genhtml_branch_coverage=1 00:11:27.340 --rc genhtml_function_coverage=1 00:11:27.340 --rc genhtml_legend=1 00:11:27.340 --rc geninfo_all_blocks=1 00:11:27.340 --rc geninfo_unexecuted_blocks=1 00:11:27.340 00:11:27.340 ' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.340 --rc genhtml_branch_coverage=1 00:11:27.340 --rc genhtml_function_coverage=1 00:11:27.340 --rc genhtml_legend=1 00:11:27.340 --rc geninfo_all_blocks=1 00:11:27.340 --rc geninfo_unexecuted_blocks=1 00:11:27.340 00:11:27.340 ' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.340 --rc genhtml_branch_coverage=1 00:11:27.340 --rc genhtml_function_coverage=1 00:11:27.340 --rc genhtml_legend=1 00:11:27.340 --rc geninfo_all_blocks=1 00:11:27.340 --rc geninfo_unexecuted_blocks=1 00:11:27.340 00:11:27.340 ' 00:11:27.340 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.340 --rc genhtml_branch_coverage=1 00:11:27.340 --rc genhtml_function_coverage=1 00:11:27.340 --rc genhtml_legend=1 00:11:27.340 --rc geninfo_all_blocks=1 00:11:27.340 --rc geninfo_unexecuted_blocks=1 00:11:27.340 00:11:27.341 ' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.341 ************************************ 00:11:27.341 START TEST nvmf_example 00:11:27.341 ************************************ 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.341 * Looking for test storage... 
00:11:27.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.341 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.602 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.602 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.602 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.603 --rc genhtml_branch_coverage=1 00:11:27.603 --rc genhtml_function_coverage=1 00:11:27.603 --rc genhtml_legend=1 00:11:27.603 --rc geninfo_all_blocks=1 00:11:27.603 --rc geninfo_unexecuted_blocks=1 00:11:27.603 00:11:27.603 ' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.603 --rc genhtml_branch_coverage=1 00:11:27.603 --rc genhtml_function_coverage=1 00:11:27.603 --rc genhtml_legend=1 00:11:27.603 --rc geninfo_all_blocks=1 00:11:27.603 --rc geninfo_unexecuted_blocks=1 00:11:27.603 00:11:27.603 ' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.603 --rc genhtml_branch_coverage=1 00:11:27.603 --rc genhtml_function_coverage=1 00:11:27.603 --rc genhtml_legend=1 00:11:27.603 --rc geninfo_all_blocks=1 00:11:27.603 --rc geninfo_unexecuted_blocks=1 00:11:27.603 00:11:27.603 ' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.603 --rc genhtml_branch_coverage=1 00:11:27.603 --rc genhtml_function_coverage=1 00:11:27.603 --rc genhtml_legend=1 00:11:27.603 --rc geninfo_all_blocks=1 00:11:27.603 --rc geninfo_unexecuted_blocks=1 00:11:27.603 00:11:27.603 ' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:27.603 15:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.603 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:27.604 15:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.604 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:35.754 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:35.754 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:35.754 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:35.754 Found net devices under 0000:31:00.0: cvl_0_0 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.754 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:35.755 Found net devices under 0000:31:00.1: cvl_0_1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:11:35.755 00:11:35.755 --- 10.0.0.2 ping statistics --- 00:11:35.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.755 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:11:35.755 00:11:35.755 --- 10.0.0.1 ping statistics --- 00:11:35.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.755 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=226325 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 226325 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 226325 ']' 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.755 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.329 15:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:36.329 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:48.571 Initializing NVMe Controllers 00:11:48.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:48.571 Initialization complete. Launching workers. 00:11:48.571 ======================================================== 00:11:48.571 Latency(us) 00:11:48.571 Device Information : IOPS MiB/s Average min max 00:11:48.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19374.95 75.68 3302.60 615.39 15591.16 00:11:48.571 ======================================================== 00:11:48.571 Total : 19374.95 75.68 3302.60 615.39 15591.16 00:11:48.571 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.571 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.571 rmmod nvme_tcp 00:11:48.571 rmmod nvme_fabrics 00:11:48.571 rmmod nvme_keyring 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 226325 ']' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 226325 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 226325 ']' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 226325 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226325 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226325' 00:11:48.571 killing process with pid 226325 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 226325 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 226325 00:11:48.571 nvmf threads initialize successfully 00:11:48.571 bdev subsystem init successfully 00:11:48.571 created a nvmf target service 00:11:48.571 create targets's poll groups done 00:11:48.571 all subsystems of target started 00:11:48.571 nvmf target is running 00:11:48.571 all subsystems of target stopped 00:11:48.571 destroy targets's poll groups done 00:11:48.571 destroyed the nvmf target service 00:11:48.571 bdev subsystem finish successfully 00:11:48.571 nvmf threads destroy successfully 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.571 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.833 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.833 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.833 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.833 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.094 00:11:49.094 real 0m21.627s 00:11:49.094 user 0m46.762s 00:11:49.094 sys 0m7.126s 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.094 ************************************ 00:11:49.094 END TEST nvmf_example 00:11:49.094 ************************************ 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.094 15:30:29 
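For reference, the spdk_nvme_perf run traced above boils down to the one-liner below; a minimal sketch assuming the target is already listening on 10.0.0.2:4420 and the binary sits under build/bin (both taken from the trace, but treat the paths as illustrative):

    # Drive the NVMe-oF/TCP target with the flags seen above: 64-deep queue
    # (-q), 4 KiB I/Os (-o), random mixed read/write (-w randrw) with a 30%
    # read mix (-M), for 10 seconds (-t), against the listener given by -r.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary table above reports the result of exactly this shape of run: roughly 19.4k IOPS at 4 KiB with a mean completion latency of about 3.3 ms.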
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.094 ************************************ 00:11:49.094 START TEST nvmf_filesystem 00:11:49.094 ************************************ 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.094 * Looking for test storage... 00:11:49.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:49.094 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:49.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.361 --rc genhtml_branch_coverage=1 00:11:49.361 --rc genhtml_function_coverage=1 00:11:49.361 --rc genhtml_legend=1 00:11:49.361 --rc geninfo_all_blocks=1 00:11:49.361 --rc geninfo_unexecuted_blocks=1 00:11:49.361 00:11:49.361 ' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:49.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.361 --rc genhtml_branch_coverage=1 00:11:49.361 --rc genhtml_function_coverage=1 00:11:49.361 --rc genhtml_legend=1 00:11:49.361 --rc geninfo_all_blocks=1 00:11:49.361 --rc geninfo_unexecuted_blocks=1 00:11:49.361 00:11:49.361 ' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:49.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.361 --rc genhtml_branch_coverage=1 00:11:49.361 --rc genhtml_function_coverage=1 00:11:49.361 --rc genhtml_legend=1 00:11:49.361 --rc geninfo_all_blocks=1 00:11:49.361 --rc geninfo_unexecuted_blocks=1 00:11:49.361 00:11:49.361 ' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:49.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.361 --rc genhtml_branch_coverage=1 00:11:49.361 --rc genhtml_function_coverage=1 00:11:49.361 --rc genhtml_legend=1 00:11:49.361 --rc geninfo_all_blocks=1 00:11:49.361 --rc geninfo_unexecuted_blocks=1 00:11:49.361 00:11:49.361 ' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:49.361 15:30:29 
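The lt/cmp_versions trace just above splits each version string on ".", "-" and ":" and walks the components left to right; a simplified standalone sketch of that comparison, written independently of the script's exact code:

    # Return success when version $1 sorts before version $2, comparing
    # numeric components split on the same separators the trace uses.
    version_lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace above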
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:49.361 15:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:49.361 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:49.362 15:30:29 
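This CONFIG_* run (continuing below) comes from sourcing test/common/build_config.sh, which is a flat file of shell assignments; a sketch of how a test can consume it, with the flag name taken from the dump above and the skip behaviour purely illustrative:

    # Sketch: load the generated build configuration and bail out early
    # when the build lacks a feature the test needs. Illustrative only.
    source ./test/common/build_config.sh
    if [[ $CONFIG_UBSAN != y ]]; then
        echo "build compiled without UBSAN; nothing to do" >&2
        exit 0
    fi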
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:49.362 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:49.362 #define SPDK_CONFIG_H 00:11:49.362 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:49.362 #define SPDK_CONFIG_APPS 1 00:11:49.362 #define SPDK_CONFIG_ARCH native 00:11:49.362 #undef SPDK_CONFIG_ASAN 00:11:49.362 #undef SPDK_CONFIG_AVAHI 00:11:49.362 #undef SPDK_CONFIG_CET 00:11:49.362 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:49.362 #define SPDK_CONFIG_COVERAGE 1 00:11:49.362 #define SPDK_CONFIG_CROSS_PREFIX 00:11:49.362 #undef SPDK_CONFIG_CRYPTO 00:11:49.362 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:49.362 #undef SPDK_CONFIG_CUSTOMOCF 00:11:49.362 #undef SPDK_CONFIG_DAOS 00:11:49.362 #define SPDK_CONFIG_DAOS_DIR 00:11:49.362 #define SPDK_CONFIG_DEBUG 1 00:11:49.362 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:49.362 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:49.362 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:49.362 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:49.362 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:49.362 #undef SPDK_CONFIG_DPDK_UADK 00:11:49.362 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.362 #define SPDK_CONFIG_EXAMPLES 1 00:11:49.362 #undef SPDK_CONFIG_FC 00:11:49.362 #define SPDK_CONFIG_FC_PATH 00:11:49.362 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:49.362 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:49.362 #define SPDK_CONFIG_FSDEV 1 00:11:49.362 #undef SPDK_CONFIG_FUSE 00:11:49.362 #undef SPDK_CONFIG_FUZZER 00:11:49.362 #define SPDK_CONFIG_FUZZER_LIB 00:11:49.362 #undef SPDK_CONFIG_GOLANG 00:11:49.362 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:49.362 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:49.362 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:49.362 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:49.362 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:49.362 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:49.362 #undef SPDK_CONFIG_HAVE_LZ4 00:11:49.362 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:49.362 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:49.362 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:49.362 #define SPDK_CONFIG_IDXD 1 00:11:49.362 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:49.362 #undef SPDK_CONFIG_IPSEC_MB 00:11:49.362 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:49.362 #define SPDK_CONFIG_ISAL 1 00:11:49.362 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:49.363 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:49.363 #define SPDK_CONFIG_LIBDIR 00:11:49.363 #undef SPDK_CONFIG_LTO 00:11:49.363 #define SPDK_CONFIG_MAX_LCORES 128 00:11:49.363 #define SPDK_CONFIG_NVME_CUSE 1 00:11:49.363 #undef SPDK_CONFIG_OCF 00:11:49.363 #define SPDK_CONFIG_OCF_PATH 00:11:49.363 #define SPDK_CONFIG_OPENSSL_PATH 00:11:49.363 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:49.363 #define SPDK_CONFIG_PGO_DIR 00:11:49.363 #undef SPDK_CONFIG_PGO_USE 00:11:49.363 #define SPDK_CONFIG_PREFIX /usr/local 00:11:49.363 #undef SPDK_CONFIG_RAID5F 00:11:49.363 #undef SPDK_CONFIG_RBD 00:11:49.363 #define SPDK_CONFIG_RDMA 1 00:11:49.363 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:49.363 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:49.363 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:49.363 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:49.363 #define SPDK_CONFIG_SHARED 1 00:11:49.363 #undef SPDK_CONFIG_SMA 00:11:49.363 
#define SPDK_CONFIG_TESTS 1 00:11:49.363 #undef SPDK_CONFIG_TSAN 00:11:49.363 #define SPDK_CONFIG_UBLK 1 00:11:49.363 #define SPDK_CONFIG_UBSAN 1 00:11:49.363 #undef SPDK_CONFIG_UNIT_TESTS 00:11:49.363 #undef SPDK_CONFIG_URING 00:11:49.363 #define SPDK_CONFIG_URING_PATH 00:11:49.363 #undef SPDK_CONFIG_URING_ZNS 00:11:49.363 #undef SPDK_CONFIG_USDT 00:11:49.363 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:49.363 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:49.363 #define SPDK_CONFIG_VFIO_USER 1 00:11:49.363 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:49.363 #define SPDK_CONFIG_VHOST 1 00:11:49.363 #define SPDK_CONFIG_VIRTIO 1 00:11:49.363 #undef SPDK_CONFIG_VTUNE 00:11:49.363 #define SPDK_CONFIG_VTUNE_DIR 00:11:49.363 #define SPDK_CONFIG_WERROR 1 00:11:49.363 #define SPDK_CONFIG_WPDK_DIR 00:11:49.363 #undef SPDK_CONFIG_XNVME 00:11:49.363 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:49.363 15:30:29 
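The paths/export.sh lines above prepend the same toolchain directories (/opt/golangci, /opt/protoc, /opt/go) once per re-source, which is why the exported PATH is mostly duplicates; a hedged sketch of a first-seen dedupe pass (an editorial illustration, not something export.sh does):

    # Collapse duplicate PATH entries, keeping first-seen order.
    dedupe_path() {
        local entry out=''
        local -A seen=()
        while IFS= read -r -d: entry; do
            [[ -n $entry && -z ${seen[$entry]} ]] && { seen[$entry]=1; out+="$entry:"; }
        done <<< "$1:"
        printf '%s\n' "${out%:}"
    }
    PATH=$(dedupe_path "$PATH")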
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
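The long run of ": <value>" / "export SPDK_TEST_*" pairs starting here (and continuing below) is the xtrace signature of a default-then-export idiom; in isolation it amounts to something like the following, with the flag name taken from the trace and the default illustrative:

    # Keep a caller-provided value, otherwise fall back to the default;
    # xtrace prints the resolved value as ": 0" or ": 1", as seen here.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF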
00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:49.363 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:49.364 15:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:49.364 15:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.364 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.365 15:30:29 
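The ASAN_OPTIONS string above and the UBSAN_OPTIONS string just below are colon-separated flag lists consumed by the sanitizer runtimes at process start; the UBSAN one, split out for readability (values verbatim from the trace, glosses mine):

    # halt_on_error=1      stop after the first reported error
    # print_stacktrace=1   attach a stack trace to each report
    # abort_on_error=1     abort() rather than plain exit on error
    # disable_coredump=0   leave core dumps enabled
    # exitcode=134         exit status to use when erroring out
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134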
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 229112 ]] 00:11:49.365 15:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 229112 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:49.365 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.KGOesp 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.KGOesp/tests/target /tmp/spdk.KGOesp 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=678309888 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606119936 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122382524416 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356562432 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6974038016 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668250112 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847906304 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23408640 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64678096896 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=184320 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:49.366 * Looking for test storage... 
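
The set_test_storage trace above caches a single `df -T` pass into bash associative arrays keyed by mount point, before the trace below walks storage_candidates for a directory with enough free space. A minimal sketch of that pattern follows; the candidate list, the 2 GiB request, and the 1K-block-to-bytes conversion are illustrative assumptions, not values lifted from the suite's helper.

  # Sketch: cache `df -T` once, then pick the first candidate directory whose
  # filesystem has enough room. Candidates and the 2 GiB request are assumptions.
  declare -A fss avails
  while read -r source fs size used avail _ mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$((avail * 1024))        # plain df reports 1K blocks
  done < <(df -T | grep -v Filesystem)
  requested_size=$((2 * 1024 * 1024 * 1024))  # 2 GiB
  for dir in /tmp /var/tmp; do                # hypothetical candidate list
    mount_point=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( avails[$mount_point] >= requested_size )); then
      echo "using $dir (${fss[$mount_point]} mounted at $mount_point)"
      break
    fi
  done

Caching df output once and indexing it per candidate avoids re-running df for every directory, which matters on hosts with many mounts like this one.
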
00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122382524416 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9188630528 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:49.366 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.367 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:49.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.630 --rc genhtml_branch_coverage=1 00:11:49.630 --rc genhtml_function_coverage=1 00:11:49.630 --rc genhtml_legend=1 00:11:49.630 --rc geninfo_all_blocks=1 00:11:49.630 --rc geninfo_unexecuted_blocks=1 00:11:49.630 00:11:49.630 ' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:49.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.630 --rc genhtml_branch_coverage=1 00:11:49.630 --rc genhtml_function_coverage=1 00:11:49.630 --rc genhtml_legend=1 00:11:49.630 --rc geninfo_all_blocks=1 00:11:49.630 --rc geninfo_unexecuted_blocks=1 00:11:49.630 00:11:49.630 ' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:49.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.630 --rc genhtml_branch_coverage=1 00:11:49.630 --rc genhtml_function_coverage=1 00:11:49.630 --rc genhtml_legend=1 00:11:49.630 --rc geninfo_all_blocks=1 00:11:49.630 --rc geninfo_unexecuted_blocks=1 00:11:49.630 00:11:49.630 ' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:49.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.630 --rc genhtml_branch_coverage=1 00:11:49.630 --rc genhtml_function_coverage=1 00:11:49.630 --rc genhtml_legend=1 00:11:49.630 --rc geninfo_all_blocks=1 00:11:49.630 --rc geninfo_unexecuted_blocks=1 00:11:49.630 00:11:49.630 ' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.630 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.631 15:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.631 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:57.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:57.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.789 15:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:57.789 Found net devices under 0000:31:00.0: cvl_0_0 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:57.789 Found net devices under 0000:31:00.1: cvl_0_1 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:57.789 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.790 
15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:11:57.790 00:11:57.790 --- 10.0.0.2 ping statistics --- 00:11:57.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.790 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:11:57.790 00:11:57.790 --- 10.0.0.1 ping statistics --- 00:11:57.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.790 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.790 ************************************ 00:11:57.790 START TEST nvmf_filesystem_no_in_capsule 00:11:57.790 ************************************ 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=232981 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 232981 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 232981 ']' 00:11:57.790 15:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.790 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.790 [2024-09-27 15:30:37.774614] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:57.790 [2024-09-27 15:30:37.774677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.790 [2024-09-27 15:30:37.864772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.790 [2024-09-27 15:30:37.913279] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.790 [2024-09-27 15:30:37.913335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.790 [2024-09-27 15:30:37.913344] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.790 [2024-09-27 15:30:37.913350] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.790 [2024-09-27 15:30:37.913356] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
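
At this point the trace has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (nvmfpid=232981) and waitforlisten is blocking on the RPC Unix socket. A minimal sketch of that start-and-wait pattern, assuming relative binary/rpc.py paths and a roughly 10 s retry budget (both assumptions, not suite constants):

  # Sketch: start the target in the test namespace, then wait for its RPC
  # socket to answer before issuing any configuration calls.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    # kill -0 sends no signal; it only checks the process still exists
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S $rpc_addr ]] && ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.1
  done

Probing the pid on every iteration turns an early crash into an immediate failure instead of a full timeout spent polling a socket that will never appear.
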
00:11:57.790 [2024-09-27 15:30:37.913509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.790 [2024-09-27 15:30:37.913666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.790 [2024-09-27 15:30:37.913717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.790 [2024-09-27 15:30:37.913717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 [2024-09-27 15:30:38.646935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 [2024-09-27 15:30:38.798639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:58.364 { 00:11:58.364 "name": "Malloc1", 00:11:58.364 "aliases": [ 00:11:58.364 "6690b840-7f50-4181-b103-6f39f7e34ca9" 00:11:58.364 ], 00:11:58.364 "product_name": "Malloc disk", 00:11:58.364 "block_size": 512, 00:11:58.364 "num_blocks": 1048576, 00:11:58.364 "uuid": "6690b840-7f50-4181-b103-6f39f7e34ca9", 00:11:58.364 "assigned_rate_limits": { 00:11:58.364 "rw_ios_per_sec": 0, 00:11:58.364 "rw_mbytes_per_sec": 0, 00:11:58.364 "r_mbytes_per_sec": 0, 00:11:58.364 "w_mbytes_per_sec": 0 00:11:58.364 }, 00:11:58.364 "claimed": true, 00:11:58.364 "claim_type": "exclusive_write", 00:11:58.364 "zoned": false, 00:11:58.364 "supported_io_types": { 00:11:58.364 "read": 
true, 00:11:58.364 "write": true, 00:11:58.364 "unmap": true, 00:11:58.364 "flush": true, 00:11:58.364 "reset": true, 00:11:58.364 "nvme_admin": false, 00:11:58.364 "nvme_io": false, 00:11:58.364 "nvme_io_md": false, 00:11:58.364 "write_zeroes": true, 00:11:58.364 "zcopy": true, 00:11:58.364 "get_zone_info": false, 00:11:58.364 "zone_management": false, 00:11:58.364 "zone_append": false, 00:11:58.365 "compare": false, 00:11:58.365 "compare_and_write": false, 00:11:58.365 "abort": true, 00:11:58.365 "seek_hole": false, 00:11:58.365 "seek_data": false, 00:11:58.365 "copy": true, 00:11:58.365 "nvme_iov_md": false 00:11:58.365 }, 00:11:58.365 "memory_domains": [ 00:11:58.365 { 00:11:58.365 "dma_device_id": "system", 00:11:58.365 "dma_device_type": 1 00:11:58.365 }, 00:11:58.365 { 00:11:58.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.365 "dma_device_type": 2 00:11:58.365 } 00:11:58.365 ], 00:11:58.365 "driver_specific": {} 00:11:58.365 } 00:11:58.365 ]' 00:11:58.365 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:58.627 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.014 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.014 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.015 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.015 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:00.015 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.928 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.928 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.928 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:02.190 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:02.191 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:02.451 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.023 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.966 ************************************ 00:12:03.966 START TEST filesystem_ext4 00:12:03.966 ************************************ 00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
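
Condensed, the export-and-connect phase traced above reduces to the sketch below. rpc.py stands in for the test's rpc_cmd wrapper; the NQNs, address, serial, and jq filters are the ones in this run, while the retry bound and the 512-byte sector assumption mirror waitforserial and sec_size_to_bytes.

#!/usr/bin/env bash
# Expose the Malloc1 bdev over NVMe/TCP and attach from the host side.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
             --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# waitforserial: poll until one namespace shows up with the subsystem serial.
i=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
  (( i++ > 15 )) && exit 1   # give up after ~30s, matching the test's bound
  sleep 2
done
# Size check: bdev block_size * num_blocks must equal the host-visible size.
bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# /sys/block/<dev>/size is in 512-byte sectors, as sec_size_to_bytes assumes.
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
(( bs * nb == nvme_size )) || exit 1    # 512 * 1048576 == 536870912 here
# Carve the single test partition the filesystem tests run against.
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe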
00:12:03.966 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:03.967 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:03.967 mke2fs 1.47.0 (5-Feb-2023) 00:12:03.967 Discarding device blocks: 0/522240 done 00:12:03.967 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:03.967 Filesystem UUID: 44aa2ef0-2f7c-4dd9-af26-37c683f790a9 00:12:03.967 Superblock backups stored on blocks: 00:12:03.967 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:03.967 00:12:03.967 Allocating group tables: 0/64 done 00:12:03.967 Writing inode tables: 0/64 done 00:12:07.319 Creating journal (8192 blocks): done 00:12:07.319 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.319 00:12:07.319 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:07.319 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.613 
15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 232981 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.613 00:12:12.613 real 0m8.467s 00:12:12.613 user 0m0.024s 00:12:12.613 sys 0m0.132s 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 ************************************ 00:12:12.613 END TEST filesystem_ext4 00:12:12.613 ************************************ 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 ************************************ 00:12:12.613 START TEST filesystem_btrfs 00:12:12.613 ************************************ 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:12.613 15:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:12.613 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:12.875 btrfs-progs v6.8.1 00:12:12.875 See https://btrfs.readthedocs.io for more information. 00:12:12.875 00:12:12.875 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:12.875 NOTE: several default settings have changed in version 5.15, please make sure 00:12:12.875 this does not affect your deployments: 00:12:12.875 - DUP for metadata (-m dup) 00:12:12.875 - enabled no-holes (-O no-holes) 00:12:12.875 - enabled free-space-tree (-R free-space-tree) 00:12:12.875 00:12:12.875 Label: (null) 00:12:12.875 UUID: f0300e7f-6bd5-4302-8392-32a0b6777d33 00:12:12.875 Node size: 16384 00:12:12.875 Sector size: 4096 (CPU page size: 4096) 00:12:12.875 Filesystem size: 510.00MiB 00:12:12.875 Block group profiles: 00:12:12.875 Data: single 8.00MiB 00:12:12.875 Metadata: DUP 32.00MiB 00:12:12.875 System: DUP 8.00MiB 00:12:12.875 SSD detected: yes 00:12:12.875 Zoned device: no 00:12:12.875 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:12.875 Checksum: crc32c 00:12:12.875 Number of devices: 1 00:12:12.875 Devices: 00:12:12.875 ID SIZE PATH 00:12:12.875 1 510.00MiB /dev/nvme0n1p1 00:12:12.875 00:12:12.875 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:12.875 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 232981 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.262 
15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:14.262
00:12:14.262 real 0m1.548s
00:12:14.262 user 0m0.025s
00:12:14.262 sys 0m0.178s
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:14.262 ************************************
00:12:14.262 END TEST filesystem_btrfs
00:12:14.262 ************************************
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:14.262 ************************************
00:12:14.262 START TEST filesystem_xfs
00:12:14.262 ************************************
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f
00:12:14.262 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:14.262 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:14.262 = sectsz=512 attr=2, projid32bit=1
00:12:14.262 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:14.263 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:14.263 data = bsize=4096 blocks=130560, imaxpct=25
00:12:14.263 = sunit=0 swidth=0 blks
00:12:14.263 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:14.263 log =internal log bsize=4096 blocks=16384, version=2
00:12:14.263 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:14.263 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:15.207 Discarding blocks...Done.
00:12:15.207 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0
00:12:15.207 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:18.507 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 232981
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:18.769
00:12:18.769 real 0m4.613s
00:12:18.769 user 0m0.026s
00:12:18.769 sys 0m0.130s
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:18.769 ************************************
00:12:18.769 END TEST filesystem_xfs
00:12:18.769 ************************************
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:18.769 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:19.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:19.341 15:30:59
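
For reference, the per-filesystem check that just ran for ext4, btrfs, and xfs reduces to roughly the following. This is a simplified sketch of target/filesystem.sh's make_filesystem/nvmf_filesystem_create pair: the retry loop around mkfs/umount and the error bookkeeping are omitted, and $nvmfpid is the target's pid (232981 in this run).

#!/usr/bin/env bash
nvmf_filesystem_create() {
  local fstype=$1 nvme_name=$2 force
  # mkfs.ext4 spells "force" as -F; btrfs and xfs use -f.
  [ "$fstype" = ext4 ] && force=-F || force=-f
  mkfs.$fstype $force /dev/${nvme_name}p1
  mount /dev/${nvme_name}p1 /mnt/device
  touch /mnt/device/aaa          # prove the mounted fs accepts a write
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"             # the target process must still be alive
  lsblk -l -o NAME | grep -q -w ${nvme_name}     # device still visible
  lsblk -l -o NAME | grep -q -w ${nvme_name}p1   # partition still visible
}
# e.g. nvmf_filesystem_create xfs nvme0n1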
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 232981 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 232981 ']' 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 232981 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 232981 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 232981' 00:12:19.341 killing process with pid 232981 00:12:19.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 232981 00:12:19.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 232981 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:19.602 00:12:19.602 real 0m22.308s 00:12:19.602 user 1m28.166s 00:12:19.602 sys 0m1.704s 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 ************************************ 00:12:19.602 END TEST nvmf_filesystem_no_in_capsule 00:12:19.602 ************************************ 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.602 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.864 ************************************ 00:12:19.864 START TEST nvmf_filesystem_in_capsule 00:12:19.864 ************************************ 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=237571 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 237571 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 237571 ']' 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
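
The in-capsule pass that starts here differs from the previous one only in how the target is brought up. A sketch, with the paths and masks taken from this run; the rpc_get_methods poll is a simplified stand-in for waitforlisten:

#!/usr/bin/env bash
# Relaunch nvmf_tgt inside the test's network namespace.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the RPC socket answers (waitforlisten, simplified).
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# The only functional difference from the first pass: allow 4096 bytes of
# in-capsule data on the TCP transport (-c 4096), as traced below.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096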
00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.864 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.864 [2024-09-27 15:31:00.159246] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:19.864 [2024-09-27 15:31:00.159306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.864 [2024-09-27 15:31:00.242917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.864 [2024-09-27 15:31:00.273957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.864 [2024-09-27 15:31:00.273992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.864 [2024-09-27 15:31:00.273997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.864 [2024-09-27 15:31:00.274005] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.864 [2024-09-27 15:31:00.274009] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.864 [2024-09-27 15:31:00.274092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.864 [2024-09-27 15:31:00.274250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.864 [2024-09-27 15:31:00.274291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.864 [2024-09-27 15:31:00.274293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 [2024-09-27 15:31:01.004342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.808 15:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 [2024-09-27 15:31:01.130677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:20.808 15:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:20.808 { 00:12:20.808 "name": "Malloc1", 00:12:20.808 "aliases": [ 00:12:20.808 "acb23a0d-34fc-4418-99db-0a4458901dc3" 00:12:20.808 ], 00:12:20.808 "product_name": "Malloc disk", 00:12:20.808 "block_size": 512, 00:12:20.808 "num_blocks": 1048576, 00:12:20.808 "uuid": "acb23a0d-34fc-4418-99db-0a4458901dc3", 00:12:20.808 "assigned_rate_limits": { 00:12:20.808 "rw_ios_per_sec": 0, 00:12:20.808 "rw_mbytes_per_sec": 0, 00:12:20.808 "r_mbytes_per_sec": 0, 00:12:20.808 "w_mbytes_per_sec": 0 00:12:20.808 }, 00:12:20.808 "claimed": true, 00:12:20.808 "claim_type": "exclusive_write", 00:12:20.808 "zoned": false, 00:12:20.808 "supported_io_types": { 00:12:20.808 "read": true, 00:12:20.808 "write": true, 00:12:20.808 "unmap": true, 00:12:20.808 "flush": true, 00:12:20.808 "reset": true, 00:12:20.808 "nvme_admin": false, 00:12:20.808 "nvme_io": false, 00:12:20.808 "nvme_io_md": false, 00:12:20.808 "write_zeroes": true, 00:12:20.808 "zcopy": true, 00:12:20.808 "get_zone_info": false, 00:12:20.808 "zone_management": false, 00:12:20.808 "zone_append": false, 00:12:20.808 "compare": false, 00:12:20.808 "compare_and_write": false, 00:12:20.808 "abort": true, 00:12:20.808 "seek_hole": false, 00:12:20.808 "seek_data": false, 00:12:20.808 "copy": true, 00:12:20.808 "nvme_iov_md": false 00:12:20.808 }, 00:12:20.808 "memory_domains": [ 00:12:20.808 { 00:12:20.808 "dma_device_id": "system", 00:12:20.808 "dma_device_type": 1 00:12:20.808 }, 00:12:20.808 { 00:12:20.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.808 "dma_device_type": 2 00:12:20.808 } 00:12:20.808 ], 00:12:20.808 "driver_specific": {} 00:12:20.808 } 00:12:20.808 ]' 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:20.808 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:20.809 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:20.809 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:20.809 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.724 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.724 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.724 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.724 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:22.724 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:24.640 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:24.901 15:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:24.901 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:25.843 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:25.843 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:25.843 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.843 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.843 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.102 ************************************ 00:12:26.102 START TEST filesystem_in_capsule_ext4 00:12:26.102 ************************************ 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:26.102 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:26.102 mke2fs 1.47.0 (5-Feb-2023) 00:12:26.102 Discarding device blocks: 0/522240 done 00:12:26.102 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:26.102 Filesystem UUID: 06c8aa63-de1d-4d95-9558-0c84dc11a383 00:12:26.102 Superblock backups stored on blocks: 00:12:26.102 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:26.102 00:12:26.102 Allocating group tables: 0/64 done 00:12:26.102 Writing inode tables: 
0/64 done 00:12:27.488 Creating journal (8192 blocks): done 00:12:27.488 Writing superblocks and filesystem accounting information: 0/64 done 00:12:27.488 00:12:27.488 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:27.488 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 237571 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.778 00:12:32.778 real 0m6.879s 00:12:32.778 user 0m0.030s 00:12:32.778 sys 0m0.079s 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.778 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 ************************************ 00:12:32.778 END TEST filesystem_in_capsule_ext4 00:12:32.778 ************************************ 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.038 
************************************ 00:12:33.038 START TEST filesystem_in_capsule_btrfs 00:12:33.038 ************************************ 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:33.038 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:33.039 btrfs-progs v6.8.1 00:12:33.039 See https://btrfs.readthedocs.io for more information. 00:12:33.039 00:12:33.039 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:33.039 NOTE: several default settings have changed in version 5.15, please make sure 00:12:33.039 this does not affect your deployments: 00:12:33.039 - DUP for metadata (-m dup) 00:12:33.039 - enabled no-holes (-O no-holes) 00:12:33.039 - enabled free-space-tree (-R free-space-tree) 00:12:33.039 00:12:33.039 Label: (null) 00:12:33.039 UUID: ba0a87fd-a034-487f-b999-a54035cf9cdf 00:12:33.039 Node size: 16384 00:12:33.039 Sector size: 4096 (CPU page size: 4096) 00:12:33.039 Filesystem size: 510.00MiB 00:12:33.039 Block group profiles: 00:12:33.039 Data: single 8.00MiB 00:12:33.039 Metadata: DUP 32.00MiB 00:12:33.039 System: DUP 8.00MiB 00:12:33.039 SSD detected: yes 00:12:33.039 Zoned device: no 00:12:33.039 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:33.039 Checksum: crc32c 00:12:33.039 Number of devices: 1 00:12:33.039 Devices: 00:12:33.039 ID SIZE PATH 00:12:33.039 1 510.00MiB /dev/nvme0n1p1 00:12:33.039 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:33.039 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 237571 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.980 00:12:33.980 real 0m0.956s 00:12:33.980 user 0m0.030s 00:12:33.980 sys 0m0.120s 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:33.980 ************************************ 00:12:33.980 END TEST filesystem_in_capsule_btrfs 00:12:33.980 ************************************ 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 ************************************ 00:12:33.980 START TEST filesystem_in_capsule_xfs 00:12:33.980 ************************************ 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:33.980 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.980 = sectsz=512 attr=2, projid32bit=1 00:12:33.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.980 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.980 = sunit=0 swidth=0 blks 00:12:33.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.980 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.919 Discarding blocks...Done. 
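
A quick way to spot-check that the 4096-byte in-capsule setting actually took effect on the transport (not part of the test; this assumes the nvmf_get_transports RPC reports an in_capsule_data_size field, which is how current SPDK builds name it):

rpc.py nvmf_get_transports | jq '.[0].in_capsule_data_size'   # expect 4096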
00:12:34.919 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:34.919 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 237571 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.463 00:12:37.463 real 0m3.170s 00:12:37.463 user 0m0.024s 00:12:37.463 sys 0m0.082s 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:37.463 ************************************ 00:12:37.463 END TEST filesystem_in_capsule_xfs 00:12:37.463 ************************************ 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:37.463 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 237571 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 237571 ']' 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 237571 00:12:37.724 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:37.724 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 237571 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 237571' 00:12:37.725 killing process with pid 237571 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 237571 00:12:37.725 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 237571 00:12:37.985 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.985 00:12:37.985 real 0m18.172s 00:12:37.985 user 1m11.899s 00:12:37.985 sys 0m1.375s 00:12:37.985 15:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.985 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.985 ************************************ 00:12:37.985 END TEST nvmf_filesystem_in_capsule 00:12:37.985 ************************************ 00:12:37.985 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.986 rmmod nvme_tcp 00:12:37.986 rmmod nvme_fabrics 00:12:37.986 rmmod nvme_keyring 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.986 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.539 00:12:40.539 real 0m51.060s 00:12:40.539 user 2m42.477s 00:12:40.539 sys 0m9.185s 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:40.539 
************************************ 00:12:40.539 END TEST nvmf_filesystem 00:12:40.539 ************************************ 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.539 ************************************ 00:12:40.539 START TEST nvmf_target_discovery 00:12:40.539 ************************************ 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:40.539 * Looking for test storage... 00:12:40.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.539 --rc genhtml_branch_coverage=1 00:12:40.539 --rc genhtml_function_coverage=1 00:12:40.539 --rc genhtml_legend=1 00:12:40.539 --rc geninfo_all_blocks=1 00:12:40.539 --rc geninfo_unexecuted_blocks=1 00:12:40.539 00:12:40.539 ' 00:12:40.539 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.540 --rc genhtml_branch_coverage=1 00:12:40.540 --rc genhtml_function_coverage=1 00:12:40.540 --rc genhtml_legend=1 00:12:40.540 --rc geninfo_all_blocks=1 00:12:40.540 --rc geninfo_unexecuted_blocks=1 00:12:40.540 00:12:40.540 ' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:40.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.540 --rc genhtml_branch_coverage=1 00:12:40.540 --rc genhtml_function_coverage=1 00:12:40.540 --rc genhtml_legend=1 00:12:40.540 --rc geninfo_all_blocks=1 00:12:40.540 --rc geninfo_unexecuted_blocks=1 00:12:40.540 00:12:40.540 ' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:40.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.540 --rc genhtml_branch_coverage=1 00:12:40.540 --rc genhtml_function_coverage=1 00:12:40.540 --rc genhtml_legend=1 00:12:40.540 --rc geninfo_all_blocks=1 00:12:40.540 --rc geninfo_unexecuted_blocks=1 00:12:40.540 00:12:40.540 ' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.540 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.692 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.693 15:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:48.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:48.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:48.693 Found net devices under 0000:31:00.0: cvl_0_0 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:48.693 15:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:48.693 Found net devices under 0000:31:00.1: cvl_0_1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.693 15:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.693 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:12:48.694 00:12:48.694 --- 10.0.0.2 ping statistics --- 00:12:48.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.694 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:48.694 00:12:48.694 --- 10.0.0.1 ping statistics --- 00:12:48.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.694 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=245716 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 245716 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 245716 ']' 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.694 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.694 [2024-09-27 15:31:28.569175] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:48.694 [2024-09-27 15:31:28.569241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.694 [2024-09-27 15:31:28.661096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.694 [2024-09-27 15:31:28.708286] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.694 [2024-09-27 15:31:28.708343] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.694 [2024-09-27 15:31:28.708357] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.694 [2024-09-27 15:31:28.708364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.694 [2024-09-27 15:31:28.708369] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
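Up to this point the trace has built the standard phy-test topology from nvmf/common.sh: the target-side port (cvl_0_0) is moved into a private network namespace, both ends receive 10.0.0.x/24 addresses, the NVMe/TCP listener port is opened in iptables, connectivity is verified with ping in each direction, and nvmf_tgt is then launched inside the namespace. A condensed sketch of those steps under the interface names from the trace; the waitforlisten poll on /var/tmp/spdk.sock and all error handling are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &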
00:12:48.694 [2024-09-27 15:31:28.708526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.694 [2024-09-27 15:31:28.708673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.694 [2024-09-27 15:31:28.708723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.694 [2024-09-27 15:31:28.708723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.956 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.956 [2024-09-27 15:31:29.441872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 Null1 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 [2024-09-27 15:31:29.514459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 Null2 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:49.219 Null3 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 Null4 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.219 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.220 15:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.220 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:49.482 00:12:49.482 Discovery Log Number of Records 6, Generation counter 6 00:12:49.482 =====Discovery Log Entry 0====== 00:12:49.482 trtype: tcp 00:12:49.482 adrfam: ipv4 00:12:49.482 subtype: current discovery subsystem 00:12:49.482 treq: not required 00:12:49.482 portid: 0 00:12:49.482 trsvcid: 4420 00:12:49.482 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:49.482 traddr: 10.0.0.2 00:12:49.483 eflags: explicit discovery connections, duplicate discovery information 00:12:49.483 sectype: none 00:12:49.483 =====Discovery Log Entry 1====== 00:12:49.483 trtype: tcp 00:12:49.483 adrfam: ipv4 00:12:49.483 subtype: nvme subsystem 00:12:49.483 treq: not required 00:12:49.483 portid: 0 00:12:49.483 trsvcid: 4420 00:12:49.483 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:49.483 traddr: 10.0.0.2 00:12:49.483 eflags: none 00:12:49.483 sectype: none 00:12:49.483 =====Discovery Log Entry 2====== 00:12:49.483 trtype: tcp 00:12:49.483 adrfam: ipv4 00:12:49.483 subtype: nvme subsystem 00:12:49.483 treq: not required 00:12:49.483 portid: 0 00:12:49.483 trsvcid: 4420 00:12:49.483 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:49.483 traddr: 10.0.0.2 00:12:49.483 eflags: none 00:12:49.483 sectype: none 00:12:49.483 =====Discovery Log Entry 3====== 00:12:49.483 trtype: tcp 00:12:49.483 adrfam: ipv4 00:12:49.483 subtype: nvme subsystem 00:12:49.483 treq: not required 00:12:49.483 portid: 0 00:12:49.483 trsvcid: 4420 00:12:49.483 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:49.483 traddr: 10.0.0.2 00:12:49.483 eflags: none 00:12:49.483 sectype: none 00:12:49.483 =====Discovery Log Entry 4====== 00:12:49.483 trtype: tcp 00:12:49.483 adrfam: ipv4 00:12:49.483 subtype: nvme subsystem 
00:12:49.483 treq: not required 00:12:49.483 portid: 0 00:12:49.483 trsvcid: 4420 00:12:49.483 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:49.483 traddr: 10.0.0.2 00:12:49.483 eflags: none 00:12:49.483 sectype: none 00:12:49.483 =====Discovery Log Entry 5====== 00:12:49.483 trtype: tcp 00:12:49.483 adrfam: ipv4 00:12:49.483 subtype: discovery subsystem referral 00:12:49.483 treq: not required 00:12:49.483 portid: 0 00:12:49.483 trsvcid: 4430 00:12:49.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:49.483 traddr: 10.0.0.2 00:12:49.483 eflags: none 00:12:49.483 sectype: none 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:49.483 Perform nvmf subsystem discovery via RPC 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.483 [ 00:12:49.483 { 00:12:49.483 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:49.483 "subtype": "Discovery", 00:12:49.483 "listen_addresses": [ 00:12:49.483 { 00:12:49.483 "trtype": "TCP", 00:12:49.483 "adrfam": "IPv4", 00:12:49.483 "traddr": "10.0.0.2", 00:12:49.483 "trsvcid": "4420" 00:12:49.483 } 00:12:49.483 ], 00:12:49.483 "allow_any_host": true, 00:12:49.483 "hosts": [] 00:12:49.483 }, 00:12:49.483 { 00:12:49.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.483 "subtype": "NVMe", 00:12:49.483 "listen_addresses": [ 00:12:49.483 { 00:12:49.483 "trtype": "TCP", 00:12:49.483 "adrfam": "IPv4", 00:12:49.483 "traddr": "10.0.0.2", 00:12:49.483 "trsvcid": "4420" 00:12:49.483 } 00:12:49.483 ], 00:12:49.483 "allow_any_host": true, 00:12:49.483 "hosts": [], 00:12:49.483 "serial_number": "SPDK00000000000001", 00:12:49.483 "model_number": "SPDK bdev Controller", 00:12:49.483 "max_namespaces": 32, 00:12:49.483 "min_cntlid": 1, 00:12:49.483 "max_cntlid": 65519, 00:12:49.483 "namespaces": [ 00:12:49.483 { 00:12:49.483 "nsid": 1, 00:12:49.483 "bdev_name": "Null1", 00:12:49.483 "name": "Null1", 00:12:49.483 "nguid": "228C2B8720E445229666406AC7957122", 00:12:49.483 "uuid": "228c2b87-20e4-4522-9666-406ac7957122" 00:12:49.483 } 00:12:49.483 ] 00:12:49.483 }, 00:12:49.483 { 00:12:49.483 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:49.483 "subtype": "NVMe", 00:12:49.483 "listen_addresses": [ 00:12:49.483 { 00:12:49.483 "trtype": "TCP", 00:12:49.483 "adrfam": "IPv4", 00:12:49.483 "traddr": "10.0.0.2", 00:12:49.483 "trsvcid": "4420" 00:12:49.483 } 00:12:49.483 ], 00:12:49.483 "allow_any_host": true, 00:12:49.483 "hosts": [], 00:12:49.483 "serial_number": "SPDK00000000000002", 00:12:49.483 "model_number": "SPDK bdev Controller", 00:12:49.483 "max_namespaces": 32, 00:12:49.483 "min_cntlid": 1, 00:12:49.483 "max_cntlid": 65519, 00:12:49.483 "namespaces": [ 00:12:49.483 { 00:12:49.483 "nsid": 1, 00:12:49.483 "bdev_name": "Null2", 00:12:49.483 "name": "Null2", 00:12:49.483 "nguid": "C8925597F7FC4AB8A9719A8C2D9BD631", 00:12:49.483 "uuid": "c8925597-f7fc-4ab8-a971-9a8c2d9bd631" 00:12:49.483 } 00:12:49.483 ] 00:12:49.483 }, 00:12:49.483 { 00:12:49.483 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:49.483 "subtype": "NVMe", 00:12:49.483 "listen_addresses": [ 00:12:49.483 { 00:12:49.483 "trtype": "TCP", 00:12:49.483 "adrfam": "IPv4", 00:12:49.483 "traddr": "10.0.0.2", 
00:12:49.483 "trsvcid": "4420" 00:12:49.483 } 00:12:49.483 ], 00:12:49.483 "allow_any_host": true, 00:12:49.483 "hosts": [], 00:12:49.483 "serial_number": "SPDK00000000000003", 00:12:49.483 "model_number": "SPDK bdev Controller", 00:12:49.483 "max_namespaces": 32, 00:12:49.483 "min_cntlid": 1, 00:12:49.483 "max_cntlid": 65519, 00:12:49.483 "namespaces": [ 00:12:49.483 { 00:12:49.483 "nsid": 1, 00:12:49.483 "bdev_name": "Null3", 00:12:49.483 "name": "Null3", 00:12:49.483 "nguid": "C1844D3BFF1544859CE518143ABDF145", 00:12:49.483 "uuid": "c1844d3b-ff15-4485-9ce5-18143abdf145" 00:12:49.483 } 00:12:49.483 ] 00:12:49.483 }, 00:12:49.483 { 00:12:49.483 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:49.483 "subtype": "NVMe", 00:12:49.483 "listen_addresses": [ 00:12:49.483 { 00:12:49.483 "trtype": "TCP", 00:12:49.483 "adrfam": "IPv4", 00:12:49.483 "traddr": "10.0.0.2", 00:12:49.483 "trsvcid": "4420" 00:12:49.483 } 00:12:49.483 ], 00:12:49.483 "allow_any_host": true, 00:12:49.483 "hosts": [], 00:12:49.483 "serial_number": "SPDK00000000000004", 00:12:49.483 "model_number": "SPDK bdev Controller", 00:12:49.483 "max_namespaces": 32, 00:12:49.483 "min_cntlid": 1, 00:12:49.483 "max_cntlid": 65519, 00:12:49.483 "namespaces": [ 00:12:49.483 { 00:12:49.483 "nsid": 1, 00:12:49.483 "bdev_name": "Null4", 00:12:49.483 "name": "Null4", 00:12:49.483 "nguid": "4E52DF89E3EE4610B3C361C4F32211DF", 00:12:49.483 "uuid": "4e52df89-e3ee-4610-b3c3-61c4f32211df" 00:12:49.483 } 00:12:49.483 ] 00:12:49.483 } 00:12:49.483 ] 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:49.483 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.484 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.746 15:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.746 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.747 rmmod nvme_tcp 00:12:49.747 rmmod nvme_fabrics 00:12:49.747 rmmod nvme_keyring 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 245716 ']' 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 245716 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 245716 ']' 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 245716 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.747 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245716 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245716' 00:12:50.009 killing process with pid 245716 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 245716 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 245716 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.009 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.560 00:12:52.560 real 0m11.950s 00:12:52.560 user 0m9.138s 00:12:52.560 sys 0m6.300s 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:52.560 ************************************ 00:12:52.560 END TEST nvmf_target_discovery 00:12:52.560 ************************************ 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.560 15:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.560 ************************************ 00:12:52.560 START TEST nvmf_referrals 00:12:52.560 ************************************ 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:52.561 * Looking for test storage... 
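The nvmf_target_discovery run that ended above boils down to a short RPC sequence. A minimal replay sketch, assuming a running nvmf_tgt reachable on 10.0.0.2 and invoking scripts/rpc.py directly in place of the rpc_cmd wrapper traced in the log (the --hostnqn/--hostid flags used above are dropped for brevity):

  # Sketch only -- condensed from the discovery.sh trace above.
  for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # expect 6 records: discovery + cnode1-4 + referral
  scripts/rpc.py nvmf_get_subsystems          # the same state as JSON, as dumped above

Teardown mirrors this in reverse (nvmf_delete_subsystem, bdev_null_delete, nvmf_discovery_remove_referral), which is exactly what the tail of the trace above shows.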
00:12:52.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.561 --rc genhtml_branch_coverage=1 00:12:52.561 --rc genhtml_function_coverage=1 00:12:52.561 --rc genhtml_legend=1 00:12:52.561 --rc geninfo_all_blocks=1 00:12:52.561 --rc geninfo_unexecuted_blocks=1 00:12:52.561 00:12:52.561 ' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.561 --rc genhtml_branch_coverage=1 00:12:52.561 --rc genhtml_function_coverage=1 00:12:52.561 --rc genhtml_legend=1 00:12:52.561 --rc geninfo_all_blocks=1 00:12:52.561 --rc geninfo_unexecuted_blocks=1 00:12:52.561 00:12:52.561 ' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.561 --rc genhtml_branch_coverage=1 00:12:52.561 --rc genhtml_function_coverage=1 00:12:52.561 --rc genhtml_legend=1 00:12:52.561 --rc geninfo_all_blocks=1 00:12:52.561 --rc geninfo_unexecuted_blocks=1 00:12:52.561 00:12:52.561 ' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.561 --rc genhtml_branch_coverage=1 00:12:52.561 --rc genhtml_function_coverage=1 00:12:52.561 --rc genhtml_legend=1 00:12:52.561 --rc geninfo_all_blocks=1 00:12:52.561 --rc geninfo_unexecuted_blocks=1 00:12:52.561 00:12:52.561 ' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.561 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.562 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.718 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.718 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.718 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.718 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.718 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:00.719 15:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:00.719 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:00.719 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:00.719 15:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:00.719 Found net devices under 0000:31:00.0: cvl_0_0 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:00.719 Found net devices under 0000:31:00.1: cvl_0_1 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:00.719 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:00.719 15:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:00.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:00.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms
00:13:00.720
00:13:00.720 --- 10.0.0.2 ping statistics ---
00:13:00.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:00.720 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:00.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:00.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:13:00.720
00:13:00.720 --- 10.0.0.1 ping statistics ---
00:13:00.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:00.720 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=250247
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 250247
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 250247 ']'
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:00.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
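The network scaffolding traced above is worth distilling: the bench splits the two physical ports across a network namespace so that one host can act as both target and initiator. A condensed sketch of the same steps (interface names cvl_0_0/cvl_0_1 are this host's E810 ports and will differ elsewhere; the ipts helper above is just an iptables wrapper that tags its rules with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns

The two pings are the sanity gate: nvmf_tgt is only started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as traced next) once both directions answer.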
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:00.720 15:31:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.981 [2024-09-27 15:31:40.510324] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:13:00.981 [2024-09-27 15:31:40.510388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:00.981 [2024-09-27 15:31:40.602143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:00.981 [2024-09-27 15:31:40.650028] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:00.981 [2024-09-27 15:31:40.650084] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:00.981 [2024-09-27 15:31:40.650093] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:00.981 [2024-09-27 15:31:40.650100] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:00.981 [2024-09-27 15:31:40.650106] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:00.981 [2024-09-27 15:31:40.650171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:13:00.981 [2024-09-27 15:31:40.650297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:13:00.981 [2024-09-27 15:31:40.650346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.981 [2024-09-27 15:31:40.650347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:00.981 [2024-09-27 15:31:41.386070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
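The referral verification that follows is symmetric: every referral added over RPC must show up both in nvmf_discovery_get_referrals and in an actual discovery log page read off the wire. A sketch of the flow being exercised, again with scripts/rpc.py standing in for the rpc_cmd wrapper and the jq filter copied from the get_referral_ips helper traced below:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # 127.0.0.2-4

After the removal pass (nvmf_discovery_remove_referral per address) both probes must come back empty, and the -n variants then re-add referrals pointing at a specific subsystem NQN rather than the discovery NQN.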
00:13:00.981 [2024-09-27 15:31:41.402446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.242 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.503 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:01.504 15:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.504 15:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.765 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.026 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.288 15:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:02.288 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.549 15:31:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.809 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:03.069 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
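
The referral checks above drive three SPDK RPCs end to end (nvmf_discovery_add_referral, nvmf_discovery_get_referrals, nvmf_discovery_remove_referral) and, after every change, compare the target's own referral list against what a host actually sees on the discovery service. A minimal sketch of that round trip, assuming a target already serving RPCs on the default /var/tmp/spdk.sock and the 10.0.0.2:8009 discovery listener from this run, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

rpc=./scripts/rpc.py   # path into the SPDK checkout; an assumption, not fixed

$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

# The target's view of its referrals...
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# ...should match the host's view, minus the discovery subsystem itself
# (the same filter get_referral_ips applies in the trace above).
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

$rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
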
00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:03.330 rmmod nvme_tcp 00:13:03.330 rmmod nvme_fabrics 00:13:03.330 rmmod nvme_keyring 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 250247 ']' 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 250247 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 250247 ']' 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 250247 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250247 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250247' 00:13:03.330 killing process with pid 250247 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 250247 00:13:03.330 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 250247 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.591 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:06.142 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:06.143
00:13:06.143 real 0m13.443s
00:13:06.143 user 0m16.352s
00:13:06.143 sys 0m6.511s
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:06.143 ************************************
00:13:06.143 END TEST nvmf_referrals
00:13:06.143 ************************************
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:06.143 ************************************
00:13:06.143 START TEST nvmf_connect_disconnect
00:13:06.143 ************************************
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:13:06.143 * Looking for test storage...
00:13:06.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344
-- # case "$op" in 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:06.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.143 --rc genhtml_branch_coverage=1 00:13:06.143 --rc genhtml_function_coverage=1 00:13:06.143 --rc genhtml_legend=1 00:13:06.143 --rc geninfo_all_blocks=1 00:13:06.143 --rc geninfo_unexecuted_blocks=1 00:13:06.143 00:13:06.143 ' 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:06.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.143 --rc genhtml_branch_coverage=1 00:13:06.143 --rc genhtml_function_coverage=1 00:13:06.143 --rc genhtml_legend=1 00:13:06.143 --rc geninfo_all_blocks=1 00:13:06.143 --rc geninfo_unexecuted_blocks=1 00:13:06.143 00:13:06.143 ' 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:06.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.143 --rc genhtml_branch_coverage=1 00:13:06.143 --rc genhtml_function_coverage=1 00:13:06.143 --rc genhtml_legend=1 00:13:06.143 --rc geninfo_all_blocks=1 00:13:06.143 --rc geninfo_unexecuted_blocks=1 00:13:06.143 00:13:06.143 ' 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:06.143 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.143 --rc genhtml_branch_coverage=1 00:13:06.143 --rc genhtml_function_coverage=1 00:13:06.143 --rc genhtml_legend=1 00:13:06.143 --rc geninfo_all_blocks=1 00:13:06.143 --rc geninfo_unexecuted_blocks=1 00:13:06.143 00:13:06.143 ' 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.143 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.144 15:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:06.144 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.296 
15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:14.296 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:14.297 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:14.297 15:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:14.297 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:14.297 Found net devices under 0000:31:00.0: cvl_0_0 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:14.297 15:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:14.297 Found net devices under 0000:31:00.1: cvl_0_1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
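
Everything from gather_supported_nvmf_pci_devs down to here is nvmf_tcp_init splitting the one physical e810 pair into a target side and an initiator side: cvl_0_0 is moved into a private network namespace and becomes 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1; the link bring-up, firewall rule, and ping check follow immediately below. Condensed into a sketch (the cvl_* and namespace names are what this rig derived, not fixed SPDK names):

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # prove the two sides can talk
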
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:14.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:14.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms
00:13:14.297
00:13:14.297 --- 10.0.0.2 ping statistics ---
00:13:14.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:14.297 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:14.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:14.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms
00:13:14.297
00:13:14.297 --- 10.0.0.1 ping statistics ---
00:13:14.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:14.297 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:13:14.297 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=255314
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 255314
00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect
-- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 255314 ']' 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.297 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.297 [2024-09-27 15:31:54.116073] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:13:14.297 [2024-09-27 15:31:54.116138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.297 [2024-09-27 15:31:54.206537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.298 [2024-09-27 15:31:54.253583] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.298 [2024-09-27 15:31:54.253642] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.298 [2024-09-27 15:31:54.253650] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.298 [2024-09-27 15:31:54.253657] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.298 [2024-09-27 15:31:54.253664] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
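
nvmf_tgt is launched inside that namespace so it owns 10.0.0.2, with shared-memory id 0 (-i), the full 0xFFFF tracepoint group mask (-e), and reactor mask 0xF (-m, hence the four reactors reported next). waitforlisten then blocks until the new pid answers on /var/tmp/spdk.sock; a hand-rolled sketch of the same launch-and-wait, with a plain poll in place of the harness helper:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the app responds (roughly what waitforlisten
# does, minus its retry cap); rpc_get_methods is a cheap no-op query.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
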
00:13:14.298 [2024-09-27 15:31:54.253814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.298 [2024-09-27 15:31:54.253957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.298 [2024-09-27 15:31:54.254026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.298 [2024-09-27 15:31:54.254027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.559 [2024-09-27 15:31:54.988824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.559 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.559 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.560 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.560 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.560 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.560 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.560 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.820 15:31:55 
00:13:14.820 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.820 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:14.820 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.820 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:14.820 [2024-09-27 15:31:55.058607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:14.821 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.821 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']'
00:13:14.821 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100
00:13:14.821 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8'
00:13:14.821 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:13:17.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[~98 further iterations condensed: one identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" notice every 2-3 seconds, 00:13:19.910 through 00:17:06.575]
00:17:09.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:09.117 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:17:09.117 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 255314 ']'
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 255314
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 255314 ']'
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 255314
00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname
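The run condensed above is connect_disconnect.sh's main loop: one hundred connect/disconnect cycles against the subsystem provisioned earlier. A standalone sketch with stock nvme-cli, not the test script itself; NQN, address, port and queue count are taken from the trace, and the test's wait for the namespace between steps is omitted:

    #!/usr/bin/env bash
    # Sketch: replay the 100-iteration connect/disconnect cycle.
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do                                    # num_iterations=100 in the trace
        nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 -i 8   # -i 8: eight I/O queues, per NVME_CONNECT
        nvme disconnect -n "$nqn"                                # prints "NQN:... disconnected 1 controller(s)"
    done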
15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 255314 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 255314' 00:17:09.118 killing process with pid 255314 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 255314 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 255314 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.118 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.660 00:17:11.660 real 4m5.514s 00:17:11.660 user 15m33.889s 00:17:11.660 sys 0m25.751s 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.660 ************************************ 00:17:11.660 END TEST nvmf_connect_disconnect 00:17:11.660 ************************************ 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.660 ************************************ 00:17:11.660 START TEST nvmf_multitarget 00:17:11.660 ************************************ 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:11.660 * Looking for test storage... 00:17:11.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.660 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:11.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.661 --rc genhtml_branch_coverage=1 00:17:11.661 --rc genhtml_function_coverage=1 00:17:11.661 --rc genhtml_legend=1 00:17:11.661 --rc geninfo_all_blocks=1 00:17:11.661 --rc geninfo_unexecuted_blocks=1 00:17:11.661 00:17:11.661 ' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:11.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.661 --rc genhtml_branch_coverage=1 00:17:11.661 --rc genhtml_function_coverage=1 00:17:11.661 --rc genhtml_legend=1 00:17:11.661 --rc geninfo_all_blocks=1 00:17:11.661 --rc geninfo_unexecuted_blocks=1 00:17:11.661 00:17:11.661 ' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:11.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.661 --rc genhtml_branch_coverage=1 00:17:11.661 --rc genhtml_function_coverage=1 00:17:11.661 --rc genhtml_legend=1 00:17:11.661 --rc geninfo_all_blocks=1 00:17:11.661 --rc geninfo_unexecuted_blocks=1 00:17:11.661 00:17:11.661 ' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:11.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.661 --rc genhtml_branch_coverage=1 00:17:11.661 --rc genhtml_function_coverage=1 00:17:11.661 --rc genhtml_legend=1 00:17:11.661 --rc geninfo_all_blocks=1 00:17:11.661 --rc geninfo_unexecuted_blocks=1 00:17:11.661 00:17:11.661 ' 00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.661 15:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...]:/var/lib/snapd/snap/bin
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...]:/var/lib/snapd/snap/bin
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...]:/var/lib/snapd/snap/bin
[the [...] runs above each repeat the same /opt/go, /opt/protoc and /opt/golangci toolchain directories many times ahead of the standard system directories; the full PATH strings have been condensed]
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:11.661 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:17:11.662 15:35:51
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.662 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:19.798 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:19.798 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:19.799 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
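The device probe traced here works from sysfs alone: each PCI function's vendor/device IDs are compared against the whitelists built above (e810, x722, mlx), and a matching function is then searched for a net/ directory to find its kernel interface, which is what produces the "Found net devices under ..." lines below. A minimal standalone sketch of the same lookup, assuming the standard sysfs layout; 0x8086/0x159b is the Intel E810 pair matched in this log:

    #!/usr/bin/env bash
    # Sketch: find net interfaces backed by Intel E810 (8086:159b) via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            # Skip the literal glob when the function has no bound netdev.
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done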
00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:19.799 Found net devices under 0000:31:00.0: cvl_0_0 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:19.799 Found net devices under 0000:31:00.1: cvl_0_1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:17:19.799 00:17:19.799 --- 10.0.0.2 ping statistics --- 00:17:19.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.799 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:17:19.799 00:17:19.799 --- 10.0.0.1 ping statistics --- 00:17:19.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.799 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=307075 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 307075 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 307075 ']' 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.799 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:19.799 [2024-09-27 15:35:59.685188] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:17:19.799 [2024-09-27 15:35:59.685252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.799 [2024-09-27 15:35:59.773892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.799 [2024-09-27 15:35:59.821320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.799 [2024-09-27 15:35:59.821373] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.799 [2024-09-27 15:35:59.821382] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.800 [2024-09-27 15:35:59.821388] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.800 [2024-09-27 15:35:59.821394] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.800 [2024-09-27 15:35:59.821539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.800 [2024-09-27 15:35:59.821697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.800 [2024-09-27 15:35:59.821852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.800 [2024-09-27 15:35:59.821854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.060 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.060 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:20.060 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:20.060 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.060 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:20.321 "nvmf_tgt_1" 00:17:20.321 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:20.582 "nvmf_tgt_2" 00:17:20.582 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
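The checks that follow drive several independent target instances inside one nvmf_tgt process through test/nvmf/target/multitarget_rpc.py. A sketch of the exact sequence the trace shows: count the default target, add two named targets (-s 32 as in the trace), recount, then delete both; the script path matches this workspace, and the assertions mirror the '[' 1 '!=' 1 ']' / '[' 3 '!=' 3 ']' checks:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two above
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]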
00:17:20.582 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:20.582 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:20.582 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:20.842 true 00:17:20.842 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:20.842 true 00:17:20.842 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:20.842 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.102 rmmod nvme_tcp 00:17:21.102 rmmod nvme_fabrics 00:17:21.102 rmmod nvme_keyring 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 307075 ']' 00:17:21.102 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 307075 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 307075 ']' 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 307075 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 307075 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.103 15:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 307075' 00:17:21.103 killing process with pid 307075 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 307075 00:17:21.103 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 307075 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.363 15:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.277 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:23.277 00:17:23.277 real 0m12.031s 00:17:23.277 user 0m10.263s 00:17:23.277 sys 0m6.295s 00:17:23.277 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.277 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:23.277 ************************************ 00:17:23.277 END TEST nvmf_multitarget 00:17:23.277 ************************************ 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.538 ************************************ 00:17:23.538 START TEST nvmf_rpc 00:17:23.538 ************************************ 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:23.538 * Looking for test storage... 
00:17:23.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.538 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:23.539 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:23.539 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.539 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.539 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:23.539 15:36:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:23.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.539 --rc genhtml_branch_coverage=1 00:17:23.539 --rc genhtml_function_coverage=1 00:17:23.539 --rc genhtml_legend=1 00:17:23.539 --rc geninfo_all_blocks=1 00:17:23.539 --rc geninfo_unexecuted_blocks=1 00:17:23.539 00:17:23.539 ' 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:23.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.539 --rc genhtml_branch_coverage=1 00:17:23.539 --rc genhtml_function_coverage=1 00:17:23.539 --rc genhtml_legend=1 00:17:23.539 --rc geninfo_all_blocks=1 00:17:23.539 --rc geninfo_unexecuted_blocks=1 00:17:23.539 00:17:23.539 ' 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:23.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.539 --rc genhtml_branch_coverage=1 00:17:23.539 --rc genhtml_function_coverage=1 00:17:23.539 --rc genhtml_legend=1 00:17:23.539 --rc geninfo_all_blocks=1 00:17:23.539 --rc geninfo_unexecuted_blocks=1 00:17:23.539 00:17:23.539 ' 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:23.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.539 --rc genhtml_branch_coverage=1 00:17:23.539 --rc genhtml_function_coverage=1 00:17:23.539 --rc genhtml_legend=1 00:17:23.539 --rc geninfo_all_blocks=1 00:17:23.539 --rc geninfo_unexecuted_blocks=1 00:17:23.539 00:17:23.539 ' 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
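As the common.sh setup traced just below shows, the initiator identity comes from nvme-cli itself: nvme gen-hostnqn emits a uuid-based NQN, and its uuid suffix doubles as the host ID passed to every connect call via the NVME_HOST array. A short sketch of that pattern; the suffix extraction is an illustration consistent with the values in this trace, not a quote of common.sh, and the subsystem NQN is the NVME_SUBNQN default seen below:

    #!/usr/bin/env bash
    # Sketch: generate a host identity once and reuse it on connect.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # illustrative: keep the trailing uuid as the host ID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"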
00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.539 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:23.801 15:36:04 
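
One detail worth flagging in the sourcing pass above: common.sh line 33 evaluates '[' '' -eq 1 ']', an empty variable handed to an integer test, so bash prints the non-fatal "integer expression expected" complaint and the branch is simply skipped. The conventional hardening is a numeric default expansion before the test; SOME_FLAG below is a placeholder, not the script's actual variable name:

    # breaks with "integer expression expected" when SOME_FLAG is unset/empty:
    [ "$SOME_FLAG" -eq 1 ] && echo "flag set"

    # defensive form: default to 0 so the integer test always sees a number
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"
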
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:23.801 15:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:31.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:31.983 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:31.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:31.984 
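
gather_supported_nvmf_pci_devs, traced here, assembles its e810/x722/mlx candidate lists from a pci_bus_cache map keyed "vendor_id:device_id" and then walks each function; on this box it finds the two Intel E810 ports (0x8086:0x159b) bound to the ice driver. A rough standalone equivalent built on plain lspci output, offered only as a sketch since the harness fills its cache elsewhere in nvmf/common.sh:

    declare -a e810 mlx
    # lspci -Dn lines look like: 0000:31:00.0 0200: 8086:159b (rev 02)
    while read -r addr _ id _; do
        case "$id" in
            8086:159b|8086:1592) e810+=("$addr") ;;   # Intel E810 family
            15b3:*)              mlx+=("$addr")  ;;   # Mellanox
        esac
    done < <(lspci -Dn)
    (( ${#e810[@]} )) && printf 'Found %s\n' "${e810[@]}"
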
15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:31.984 Found net devices under 0000:31:00.0: cvl_0_0 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:31.984 Found net devices under 0000:31:00.1: cvl_0_1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
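
Each matching PCI function is then mapped to its kernel interface name by globbing the device's sysfs net/ directory and stripping the path, which is exactly how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1 in the lines around this point. Condensed sketch of that lookup:

    pci=0000:31:00.0                                  # first E810 port in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob the sysfs entries
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
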
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:31.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:17:31.984 00:17:31.984 --- 10.0.0.2 ping statistics --- 00:17:31.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.984 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:31.984 00:17:31.984 --- 10.0.0.1 ping statistics --- 00:17:31.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.984 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=312366 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 312366 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 312366 ']' 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.984 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.984 [2024-09-27 15:36:11.871664] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
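
nvmf_tcp_init then gives the test a real two-endpoint topology on one host: the target-side port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24 while the initiator keeps cvl_0_1 at 10.0.0.1/24, the firewall is opened for port 4420, both directions are verified with ping, and nvmf_tgt is finally launched inside the namespace with -m 0xF (cores 0-3, hence the four reactors below) and -e 0xFFFF (all tracepoint groups). Condensed replay of the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &     # pid 312366 in this run
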
00:17:31.984 [2024-09-27 15:36:11.871748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.984 [2024-09-27 15:36:11.960906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.984 [2024-09-27 15:36:12.008593] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.984 [2024-09-27 15:36:12.008648] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.984 [2024-09-27 15:36:12.008656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.984 [2024-09-27 15:36:12.008664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.984 [2024-09-27 15:36:12.008670] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.984 [2024-09-27 15:36:12.008819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.984 [2024-09-27 15:36:12.008961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.984 [2024-09-27 15:36:12.009039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.984 [2024-09-27 15:36:12.009040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.247 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:32.509 "tick_rate": 2400000000, 00:17:32.509 "poll_groups": [ 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_000", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_001", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_002", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 
"current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_003", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [] 00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }' 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.509 [2024-09-27 15:36:12.853164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.509 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:32.509 "tick_rate": 2400000000, 00:17:32.509 "poll_groups": [ 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_000", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [ 00:17:32.509 { 00:17:32.509 "trtype": "TCP" 00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_001", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [ 00:17:32.509 { 00:17:32.509 "trtype": "TCP" 00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_002", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.509 "current_admin_qpairs": 0, 00:17:32.509 "current_io_qpairs": 0, 00:17:32.509 "pending_bdev_io": 0, 00:17:32.509 "completed_nvme_io": 0, 00:17:32.509 "transports": [ 00:17:32.509 { 00:17:32.509 "trtype": "TCP" 
00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "nvmf_tgt_poll_group_003", 00:17:32.509 "admin_qpairs": 0, 00:17:32.509 "io_qpairs": 0, 00:17:32.510 "current_admin_qpairs": 0, 00:17:32.510 "current_io_qpairs": 0, 00:17:32.510 "pending_bdev_io": 0, 00:17:32.510 "completed_nvme_io": 0, 00:17:32.510 "transports": [ 00:17:32.510 { 00:17:32.510 "trtype": "TCP" 00:17:32.510 } 00:17:32.510 ] 00:17:32.510 } 00:17:32.510 ] 00:17:32.510 }' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.510 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.773 Malloc1 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
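
The jcount and jsum helpers exercised above are rpc.sh's jq-based assertions over nvmf_get_stats: jcount counts the lines a filter emits (four poll groups, one per core in the 0xF mask) and jsum totals a numeric field with awk; before nvmf_create_transport every poll group's transports array is empty, and afterwards each carries a TCP entry while all qpair counters remain 0 on the idle target. Minimal sketch of the same checks, assuming the in-tree scripts/rpc.py as the RPC client:

    stats=$(scripts/rpc.py nvmf_get_stats)

    jcount() { jq "$1" <<< "$stats" | wc -l; }                       # lines matched
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; } # field total

    (( $(jcount '.poll_groups[].name') == 4 ))    || echo "unexpected poll group count"
    (( $(jsum '.poll_groups[].io_qpairs') == 0 )) || echo "expected an idle target"
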
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.773 [2024-09-27 15:36:13.035431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:32.773 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:32.774 [2024-09-27 15:36:13.068620] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:32.774 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:32.774 could not add new controller: failed to write to nvme-fabrics device 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:32.774 15:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.774 15:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:34.160 15:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.160 15:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:34.160 15:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.160 15:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:34.160 15:36:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:36.709 [2024-09-27 15:36:16.814933] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:36.709 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:36.709 could not add new controller: failed to write to nvme-fabrics device 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:36.709 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.710 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.710 
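
The connect/disconnect cycle above is the per-host access-control check at the heart of rpc.sh: with allow_any_host disabled (-d), the connect is rejected with "does not allow host" and the NOT wrapper asserts that failure; whitelisting the host NQN via nvmf_subsystem_add_host makes the same connect succeed; removing the host restores the rejection; and re-enabling allow_any_host (-e) opens the subsystem again, as the very next connect shows. The RPC skeleton, with this run's long uuid host NQN abbreviated to $HOSTNQN:

    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN"              # expected failure: host not allowed

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN"              # now succeeds
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
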
15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.710 15:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:38.092 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:38.093 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:38.093 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:38.093 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:38.093 15:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.002 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.262 
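
waitforserial and waitforserial_disconnect, wrapped around every connect in this section, simply poll lsblk until the block device with serial SPDKISFASTANDAWESOME appears (or disappears), sleeping two seconds between attempts and giving up after 15 tries. A trimmed sketch of the appearance side of that loop:

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
            sleep 2
        done
        return 1    # device never showed up
    }

    waitforserial SPDKISFASTANDAWESOME
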
15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.262 [2024-09-27 15:36:20.517137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.262 15:36:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.645 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:41.645 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:41.645 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.645 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:41.645 15:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:43.554 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 [2024-09-27 15:36:24.210169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.727 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.727 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:45.727 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.727 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:45.727 15:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 [2024-09-27 15:36:27.910793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.641 15:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.028 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.028 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:49.028 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.028 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:49.028 15:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:51.575 
15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
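The iterations above all have the same shape. As a rough reconstruction of the target/rpc.sh lines 81-94 loop, with every command and argument taken from the xtrace (rpc_cmd appears to be the harness wrapper around scripts/rpc.py, $loops is whatever the script set — five passes are visible in this run, and NVME_HOST is the --hostnqn/--hostid pair the connects use):

# One pass of the create/connect/disconnect/delete cycle traced above.
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME            # poll until the namespace appears
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME # poll until it is gone again
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done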
00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 [2024-09-27 15:36:31.644236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 15:36:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.961 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.961 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:52.961 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.961 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:52.961 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
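waitforserial itself is visible in the trace (common/autotest_common.sh, lines 1198-1208): it polls lsblk for block devices carrying the subsystem serial, sleeping two seconds between attempts, with a 16-try budget. A minimal sketch assembled from those traced lines (the failure path after the budget runs out is an assumption; the trace only shows the success path):

# Wait until the expected number of block devices with this serial show up.
waitforserial() {
  local serial=$1 i=0
  local nvme_device_counter=${2:-1} nvme_devices=0
  while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
  done
  return 1  # assumed: give up once the retry budget is exhausted
}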
00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.875 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 [2024-09-27 15:36:35.376646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.135 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.518 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.518 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:56.518 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.518 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:56.518 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:58.431 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:58.692 
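From here the trace enters the second loop in target/rpc.sh (lines 99-107): five passes that churn subsystem setup and teardown through RPC alone, with no host connection in between. Reconstructed from the traced commands (the count comes from the 'seq 1 5' just above):

# RPC-only churn: create, configure, and delete without ever connecting a host.
for i in $(seq 1 5); do
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done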
15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 [2024-09-27 15:36:39.081062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.692 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 [2024-09-27 15:36:39.129167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.693 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 
15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 [2024-09-27 15:36:39.177307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.954 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 [2024-09-27 15:36:39.225444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 [2024-09-27 15:36:39.273575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:58.955 "tick_rate": 2400000000, 00:17:58.955 "poll_groups": [ 00:17:58.955 { 00:17:58.955 "name": "nvmf_tgt_poll_group_000", 00:17:58.955 "admin_qpairs": 0, 00:17:58.955 "io_qpairs": 224, 00:17:58.955 "current_admin_qpairs": 0, 00:17:58.955 "current_io_qpairs": 0, 00:17:58.955 "pending_bdev_io": 0, 00:17:58.955 "completed_nvme_io": 393, 00:17:58.955 "transports": [ 00:17:58.955 { 00:17:58.955 "trtype": "TCP" 00:17:58.955 } 00:17:58.955 ] 00:17:58.955 }, 00:17:58.955 { 00:17:58.955 "name": "nvmf_tgt_poll_group_001", 00:17:58.955 "admin_qpairs": 1, 00:17:58.955 "io_qpairs": 223, 00:17:58.955 "current_admin_qpairs": 0, 00:17:58.955 "current_io_qpairs": 0, 00:17:58.955 "pending_bdev_io": 0, 00:17:58.955 "completed_nvme_io": 223, 00:17:58.955 "transports": [ 00:17:58.955 { 00:17:58.955 "trtype": "TCP" 00:17:58.955 } 00:17:58.955 ] 00:17:58.955 }, 00:17:58.955 { 00:17:58.955 "name": "nvmf_tgt_poll_group_002", 00:17:58.955 "admin_qpairs": 6, 00:17:58.955 "io_qpairs": 218, 00:17:58.955 "current_admin_qpairs": 0, 00:17:58.955 "current_io_qpairs": 0, 00:17:58.955 "pending_bdev_io": 0, 00:17:58.955 "completed_nvme_io": 344, 00:17:58.955 "transports": [ 00:17:58.955 { 00:17:58.955 "trtype": "TCP" 00:17:58.955 } 00:17:58.955 ] 00:17:58.955 }, 00:17:58.955 { 00:17:58.955 "name": "nvmf_tgt_poll_group_003", 00:17:58.955 "admin_qpairs": 0, 00:17:58.955 "io_qpairs": 224, 00:17:58.955 "current_admin_qpairs": 0, 00:17:58.955 "current_io_qpairs": 0, 00:17:58.955 "pending_bdev_io": 0, 00:17:58.955 "completed_nvme_io": 279, 00:17:58.955 "transports": [ 00:17:58.955 { 00:17:58.955 "trtype": "TCP" 00:17:58.955 } 00:17:58.955 ] 00:17:58.955 } 00:17:58.955 ] 00:17:58.955 }' 00:17:58.955 15:36:39 
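The nvmf_get_stats blob above is then reduced by the jsum helper traced next (target/rpc.sh lines 19-20): a jq filter over the per-poll-group fields, piped through awk to sum them. A sketch consistent with the trace (feeding the JSON from the $stats variable is an assumption; the trace only shows the jq and awk halves):

# Sum one numeric field across all poll groups in the nvmf_get_stats output.
jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
# The two checks traced below: 0+1+6+0 admin qpairs = 7, and
# 224+223+218+224 io qpairs = 889.
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))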
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.955 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.955 rmmod nvme_tcp 00:17:59.217 rmmod nvme_fabrics 00:17:59.217 rmmod nvme_keyring 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 312366 ']' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 312366 ']' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312366' 
00:17:59.217 killing process with pid 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 312366 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.217 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:01.764 00:18:01.764 real 0m37.958s 00:18:01.764 user 1m52.745s 00:18:01.764 sys 0m7.989s 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 ************************************ 00:18:01.764 END TEST nvmf_rpc 00:18:01.764 ************************************ 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.764 15:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 ************************************ 00:18:01.764 START TEST nvmf_invalid 00:18:01.765 ************************************ 00:18:01.765 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:01.765 * Looking for test storage... 
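The teardown that closes nvmf_rpc above is the usual nvmftestfini sequence: sync, unload the kernel NVMe/TCP modules (with retries, since controllers may still be draining), kill the target by PID, strip the SPDK_NVMF iptables rules, and flush the test interface. Condensed from the traced commands; $nvmfpid is a stand-in for the PID the harness tracks (312366 in this run):

# Condensed teardown, as traced above.
sync
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # may fail until all controllers are gone
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # 312366 here
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test firewall rules
ip -4 addr flush cvl_0_1                               # clear the test interface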
00:18:01.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.765 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:01.765 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:01.765 15:36:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.765 --rc genhtml_branch_coverage=1 00:18:01.765 --rc genhtml_function_coverage=1 00:18:01.765 --rc genhtml_legend=1 00:18:01.765 --rc geninfo_all_blocks=1 00:18:01.765 --rc geninfo_unexecuted_blocks=1 00:18:01.765 00:18:01.765 ' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.765 --rc genhtml_branch_coverage=1 00:18:01.765 --rc genhtml_function_coverage=1 00:18:01.765 --rc genhtml_legend=1 00:18:01.765 --rc geninfo_all_blocks=1 00:18:01.765 --rc geninfo_unexecuted_blocks=1 00:18:01.765 00:18:01.765 ' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.765 --rc genhtml_branch_coverage=1 00:18:01.765 --rc genhtml_function_coverage=1 00:18:01.765 --rc genhtml_legend=1 00:18:01.765 --rc geninfo_all_blocks=1 00:18:01.765 --rc geninfo_unexecuted_blocks=1 00:18:01.765 00:18:01.765 ' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.765 --rc genhtml_branch_coverage=1 00:18:01.765 --rc genhtml_function_coverage=1 00:18:01.765 --rc genhtml_legend=1 00:18:01.765 --rc geninfo_all_blocks=1 00:18:01.765 --rc geninfo_unexecuted_blocks=1 00:18:01.765 00:18:01.765 ' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:01.765 15:36:42 
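The lcov check traced in this stretch is scripts/common.sh's version comparison: split each version string on '.', '-' and ':', then compare field by field, padding the shorter list with zeros. The real cmp_versions handles all operators and validates each field as a decimal; this sketch collapses it to the '<' path actually exercised here (deciding that lcov 1.15 predates 2):

# Approximate field-wise version compare; only the '<' path from the trace.
lt() {
  local ver1 ver2 ver1_l ver2_l v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov predates 2"   # 1 < 2 on the first field, so this prints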
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.765 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
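One genuine wart surfaces just below: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the flag variable it dereferences is empty, and test(1) rejects the empty string as a non-integer, hence the "integer expression expected" complaint in the output. It is harmless in this run, but the defensive form would default the value before the numeric test. A sketch ($SOME_FLAG is a placeholder; the trace does not reveal which variable line 33 actually reads):

# Default the flag to 0 so test(1) always sees an integer, empty or not.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  :   # flag-enabled path
fi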
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:01.766 15:36:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:09.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:09.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
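Two annotations on the trace above. The earlier "integer expression expected" message comes from nvmf/common.sh line 33, where '[' '' -eq 1 ']' asks bash to compare an empty string numerically; a guard such as [ -n "$var" ] && [ "$var" -eq 1 ] avoids that noise. Separately, gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID (0x8086:0x159b is the Intel E810 matched here, bound to the ice driver). A minimal sketch, not part of the test, of checking the same IDs by hand through standard Linux sysfs:

    # Report any Intel E810 (vendor 0x8086, device 0x159b) on the PCI bus.
    # The sysfs layout is generic Linux; nothing here is SPDK-specific.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")    # e.g. 0x8086
        device=$(cat "$dev/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] && echo "E810 at ${dev##*/}"
    done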
00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:09.907 Found net devices under 0000:31:00.0: cvl_0_0 00:18:09.907 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:09.908 Found net devices under 0000:31:00.1: cvl_0_1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:18:09.908 00:18:09.908 --- 10.0.0.2 ping statistics --- 00:18:09.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.908 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:18:09.908 00:18:09.908 --- 10.0.0.1 ping statistics --- 00:18:09.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.908 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=321992 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 321992 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 321992 ']' 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.908 15:36:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:09.908 [2024-09-27 15:36:49.787074] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
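The "Starting SPDK ... initialization" line above is nvmf_tgt coming up inside the cvl_0_0_ns_spdk namespace (its DPDK EAL parameters follow below); the waitforlisten call traced just before it blocks until the target's RPC socket answers. A minimal approximation of that readiness check, assuming the stock rpc.py and its rpc_get_methods method:

    # Poll the UNIX-domain RPC socket until nvmf_tgt is ready for commands.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done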
00:18:09.908 [2024-09-27 15:36:49.787166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.908 [2024-09-27 15:36:49.880531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.908 [2024-09-27 15:36:49.929108] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.908 [2024-09-27 15:36:49.929163] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.908 [2024-09-27 15:36:49.929172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.908 [2024-09-27 15:36:49.929179] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.908 [2024-09-27 15:36:49.929185] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.908 [2024-09-27 15:36:49.929299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.908 [2024-09-27 15:36:49.929456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.908 [2024-09-27 15:36:49.929609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.908 [2024-09-27 15:36:49.929611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.169 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:10.430 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22883 00:18:10.430 [2024-09-27 15:36:50.823365] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:10.430 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:10.430 { 00:18:10.430 "nqn": "nqn.2016-06.io.spdk:cnode22883", 00:18:10.430 "tgt_name": "foobar", 00:18:10.430 "method": "nvmf_create_subsystem", 00:18:10.430 "req_id": 1 00:18:10.430 } 00:18:10.430 Got JSON-RPC error response 00:18:10.430 response: 00:18:10.430 { 00:18:10.430 "code": -32603, 00:18:10.430 "message": "Unable to find target foobar" 00:18:10.430 }' 00:18:10.430 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:10.430 { 00:18:10.430 "nqn": "nqn.2016-06.io.spdk:cnode22883", 00:18:10.430 "tgt_name": "foobar", 00:18:10.430 "method": "nvmf_create_subsystem", 00:18:10.430 "req_id": 1 00:18:10.430 } 00:18:10.430 Got JSON-RPC error response 00:18:10.430 
response: 00:18:10.430 { 00:18:10.430 "code": -32603, 00:18:10.430 "message": "Unable to find target foobar" 00:18:10.430 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:10.430 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:10.430 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8145 00:18:10.691 [2024-09-27 15:36:51.032236] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8145: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:10.691 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:10.691 { 00:18:10.691 "nqn": "nqn.2016-06.io.spdk:cnode8145", 00:18:10.691 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:10.691 "method": "nvmf_create_subsystem", 00:18:10.691 "req_id": 1 00:18:10.691 } 00:18:10.691 Got JSON-RPC error response 00:18:10.691 response: 00:18:10.691 { 00:18:10.691 "code": -32602, 00:18:10.691 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:10.691 }' 00:18:10.692 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:10.692 { 00:18:10.692 "nqn": "nqn.2016-06.io.spdk:cnode8145", 00:18:10.692 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:10.692 "method": "nvmf_create_subsystem", 00:18:10.692 "req_id": 1 00:18:10.692 } 00:18:10.692 Got JSON-RPC error response 00:18:10.692 response: 00:18:10.692 { 00:18:10.692 "code": -32602, 00:18:10.692 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:10.692 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:10.692 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:10.692 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13545 00:18:10.954 [2024-09-27 15:36:51.237008] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13545: invalid model number 'SPDK_Controller' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:10.954 { 00:18:10.954 "nqn": "nqn.2016-06.io.spdk:cnode13545", 00:18:10.954 "model_number": "SPDK_Controller\u001f", 00:18:10.954 "method": "nvmf_create_subsystem", 00:18:10.954 "req_id": 1 00:18:10.954 } 00:18:10.954 Got JSON-RPC error response 00:18:10.954 response: 00:18:10.954 { 00:18:10.954 "code": -32602, 00:18:10.954 "message": "Invalid MN SPDK_Controller\u001f" 00:18:10.954 }' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:10.954 { 00:18:10.954 "nqn": "nqn.2016-06.io.spdk:cnode13545", 00:18:10.954 "model_number": "SPDK_Controller\u001f", 00:18:10.954 "method": "nvmf_create_subsystem", 00:18:10.954 "req_id": 1 00:18:10.954 } 00:18:10.954 Got JSON-RPC error response 00:18:10.954 response: 00:18:10.954 { 00:18:10.954 "code": -32602, 00:18:10.954 "message": "Invalid MN SPDK_Controller\u001f" 00:18:10.954 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:10.954 15:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:10.954 15:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:10.954 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
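This character-by-character trace continues below until all 21 positions of the random serial number are filled. A condensed sketch of the helper being traced (an approximation of gen_random_s in target/invalid.sh, not the verbatim function); note that RANDOM=0, set at target/invalid.sh@16, makes the "random" sequence reproducible across runs:

    # Build a $1-character string of printable ASCII (codes 32-127).
    gen_random_s() {
        local length=$1 ll hex string=''
        for ((ll = 0; ll < length; ll++)); do
            hex=$(printf '%x' $((32 + RANDOM % 96)))   # one code point in 32..127
            string+=$(echo -e "\\x$hex")               # append that character
        done
        echo "$string"
    }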
00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.955 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! == \- ]] 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!|Lux"1zb1PB"'\''*9OpTt' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!|Lux"1zb1PB"'\''*9OpTt' nqn.2016-06.io.spdk:cnode27023 00:18:11.217 [2024-09-27 15:36:51.610563] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27023: invalid serial number '!|Lux"1zb1PB"'*9OpTt' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:11.217 { 00:18:11.217 "nqn": "nqn.2016-06.io.spdk:cnode27023", 00:18:11.217 "serial_number": "!|Lux\"1zb1PB\"'\''\u007f*9OpTt", 00:18:11.217 "method": "nvmf_create_subsystem", 00:18:11.217 "req_id": 1 00:18:11.217 } 00:18:11.217 Got JSON-RPC error response 00:18:11.217 response: 00:18:11.217 { 00:18:11.217 "code": -32602, 00:18:11.217 "message": "Invalid SN !|Lux\"1zb1PB\"'\''\u007f*9OpTt" 00:18:11.217 }' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:11.217 { 00:18:11.217 "nqn": "nqn.2016-06.io.spdk:cnode27023", 00:18:11.217 "serial_number": "!|Lux\"1zb1PB\"'\u007f*9OpTt", 00:18:11.217 "method": "nvmf_create_subsystem", 00:18:11.217 "req_id": 1 00:18:11.217 } 00:18:11.217 Got JSON-RPC error response 00:18:11.217 response: 00:18:11.217 { 00:18:11.217 "code": -32602, 00:18:11.217 "message": "Invalid SN !|Lux\"1zb1PB\"'\u007f*9OpTt" 00:18:11.217 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.217 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
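While the loop assembles the next 41-character invalid model number, it helps to contrast the shape of a call the target does accept (a sketch; the serial and model values are illustrative, and $rpc is the rpc.py path bound at target/invalid.sh@12):

    # A well-formed create: valid NQN, printable serial and model strings.
    $rpc nvmf_create_subsystem -a -s SPDK00000000000001 \
        -d SPDK_Controller1 nqn.2016-06.io.spdk:cnode1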
00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2c' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.480 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
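The string under construction here is handed to nvmf_create_subsystem -d below. Every negative test in this file follows the same pattern, sketched here with the generated value abbreviated to $bad_model:

    # Issue the RPC with a deliberately bad value and capture the error text;
    # the test passes only if the target rejected the request as expected.
    out=$($rpc nvmf_create_subsystem -d "$bad_model" nqn.2016-06.io.spdk:cnode6683 2>&1) || true
    [[ $out == *'Invalid MN'* ]]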
00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x78' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 106 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:11.481 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ M == \- ]] 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'M;^.Iu-j?nB/>P3,);[?J[a9mxW[U{|Bj1wW|La$' 00:18:11.743 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'M;^.Iu-j?nB/>P3,);[?J[a9mxW[U{|Bj1wW|La$' nqn.2016-06.io.spdk:cnode6683 00:18:11.744 [2024-09-27 15:36:52.152577] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6683: invalid model number 'M;^.Iu-j?nB/>P3,);[?J[a9mxW[U{|Bj1wW|La$' 00:18:11.744 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:11.744 { 00:18:11.744 "nqn": "nqn.2016-06.io.spdk:cnode6683", 00:18:11.744 "model_number": "M;^.Iu-j?nB/>P3,);[?J[\u007fa9mxW[U{|Bj1wW|La$", 00:18:11.744 "method": "nvmf_create_subsystem", 00:18:11.744 "req_id": 1 00:18:11.744 } 00:18:11.744 Got JSON-RPC error response 00:18:11.744 response: 00:18:11.744 { 00:18:11.744 "code": -32602, 00:18:11.744 "message": "Invalid MN M;^.Iu-j?nB/>P3,);[?J[\u007fa9mxW[U{|Bj1wW|La$" 00:18:11.744 }' 00:18:11.744 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:11.744 { 00:18:11.744 "nqn": "nqn.2016-06.io.spdk:cnode6683", 00:18:11.744 "model_number": "M;^.Iu-j?nB/>P3,);[?J[\u007fa9mxW[U{|Bj1wW|La$", 00:18:11.744 "method": "nvmf_create_subsystem", 00:18:11.744 "req_id": 1 00:18:11.744 } 00:18:11.744 Got JSON-RPC error response 00:18:11.744 response: 00:18:11.744 { 00:18:11.744 "code": -32602, 00:18:11.744 "message": "Invalid MN M;^.Iu-j?nB/>P3,);[?J[\u007fa9mxW[U{|Bj1wW|La$" 00:18:11.744 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:11.744 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:12.005 [2024-09-27 15:36:52.349446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.005 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:12.266 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:12.267 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:12.267 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:12.267 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:12.267 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:12.528 [2024-09-27 
15:36:52.763024] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:12.528 { 00:18:12.528 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:12.528 "listen_address": { 00:18:12.528 "trtype": "tcp", 00:18:12.528 "traddr": "", 00:18:12.528 "trsvcid": "4421" 00:18:12.528 }, 00:18:12.528 "method": "nvmf_subsystem_remove_listener", 00:18:12.528 "req_id": 1 00:18:12.528 } 00:18:12.528 Got JSON-RPC error response 00:18:12.528 response: 00:18:12.528 { 00:18:12.528 "code": -32602, 00:18:12.528 "message": "Invalid parameters" 00:18:12.528 }' 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:12.528 { 00:18:12.528 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:12.528 "listen_address": { 00:18:12.528 "trtype": "tcp", 00:18:12.528 "traddr": "", 00:18:12.528 "trsvcid": "4421" 00:18:12.528 }, 00:18:12.528 "method": "nvmf_subsystem_remove_listener", 00:18:12.528 "req_id": 1 00:18:12.528 } 00:18:12.528 Got JSON-RPC error response 00:18:12.528 response: 00:18:12.528 { 00:18:12.528 "code": -32602, 00:18:12.528 "message": "Invalid parameters" 00:18:12.528 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12392 -i 0 00:18:12.528 [2024-09-27 15:36:52.943609] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12392: invalid cntlid range [0-65519] 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:12.528 { 00:18:12.528 "nqn": "nqn.2016-06.io.spdk:cnode12392", 00:18:12.528 "min_cntlid": 0, 00:18:12.528 "method": "nvmf_create_subsystem", 00:18:12.528 "req_id": 1 00:18:12.528 } 00:18:12.528 Got JSON-RPC error response 00:18:12.528 response: 00:18:12.528 { 00:18:12.528 "code": -32602, 00:18:12.528 "message": "Invalid cntlid range [0-65519]" 00:18:12.528 }' 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:12.528 { 00:18:12.528 "nqn": "nqn.2016-06.io.spdk:cnode12392", 00:18:12.528 "min_cntlid": 0, 00:18:12.528 "method": "nvmf_create_subsystem", 00:18:12.528 "req_id": 1 00:18:12.528 } 00:18:12.528 Got JSON-RPC error response 00:18:12.528 response: 00:18:12.528 { 00:18:12.528 "code": -32602, 00:18:12.528 "message": "Invalid cntlid range [0-65519]" 00:18:12.528 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:12.528 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21595 -i 65520 00:18:12.790 [2024-09-27 15:36:53.128247] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21595: invalid cntlid range [65520-65519] 00:18:12.790 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:12.790 { 00:18:12.790 "nqn": "nqn.2016-06.io.spdk:cnode21595", 00:18:12.790 "min_cntlid": 65520, 00:18:12.790 "method": "nvmf_create_subsystem", 00:18:12.790 "req_id": 1 00:18:12.790 } 00:18:12.790 Got JSON-RPC error response 00:18:12.790 response: 00:18:12.790 { 00:18:12.790 "code": -32602, 00:18:12.790 "message": "Invalid cntlid range [65520-65519]" 00:18:12.790 }' 00:18:12.790 
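The long printf %x / echo -e trace above is invalid.sh building a random model-number string one character at a time: draw a byte value, render it as hex, and let echo -e decode the \xNN escape before appending it. A condensed sketch of that generator, restricted to printable ASCII for simplicity (the harness draws from a wider byte range; the string it produced here is 41 characters, one past the 40-byte NVMe model number field, which is why the subsystem create was rejected):

# Condensed sketch of the random-string builder traced above; printable
# ASCII only, whereas the harness's version emits a wider byte range.
string=''
length=41                                      # one past the 40-byte MN field
for ((ll = 0; ll < length; ll++)); do
    c=$((RANDOM % 94 + 33))                    # printable ASCII, '!' through '~'
    string+=$(echo -e "\\x$(printf %x "$c")")  # hex-encode, decode one character
done
echo "$string"
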
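The cntlid checks running here all share one shape: issue nvmf_create_subsystem with an out-of-range bound, capture the JSON-RPC error text, and glob-match it, exactly as the [[ ... == *Invalid cntlid range* ]] comparisons show. A minimal standalone version of one assertion (rpc.py path, NQN, and the -i flag are copied from the log; capturing via 2>&1 is an assumption about where the error text arrives):

# Valid controller IDs span 1-65519 (0xFFEF), so min_cntlid 0 must fail.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12392 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]]   # assertion fails if the RPC wrongly succeeded
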
15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:12.790 { 00:18:12.790 "nqn": "nqn.2016-06.io.spdk:cnode21595", 00:18:12.790 "min_cntlid": 65520, 00:18:12.790 "method": "nvmf_create_subsystem", 00:18:12.790 "req_id": 1 00:18:12.790 } 00:18:12.790 Got JSON-RPC error response 00:18:12.790 response: 00:18:12.790 { 00:18:12.790 "code": -32602, 00:18:12.790 "message": "Invalid cntlid range [65520-65519]" 00:18:12.790 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:12.790 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3049 -I 0 00:18:13.052 [2024-09-27 15:36:53.308810] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3049: invalid cntlid range [1-0] 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:13.052 { 00:18:13.052 "nqn": "nqn.2016-06.io.spdk:cnode3049", 00:18:13.052 "max_cntlid": 0, 00:18:13.052 "method": "nvmf_create_subsystem", 00:18:13.052 "req_id": 1 00:18:13.052 } 00:18:13.052 Got JSON-RPC error response 00:18:13.052 response: 00:18:13.052 { 00:18:13.052 "code": -32602, 00:18:13.052 "message": "Invalid cntlid range [1-0]" 00:18:13.052 }' 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:13.052 { 00:18:13.052 "nqn": "nqn.2016-06.io.spdk:cnode3049", 00:18:13.052 "max_cntlid": 0, 00:18:13.052 "method": "nvmf_create_subsystem", 00:18:13.052 "req_id": 1 00:18:13.052 } 00:18:13.052 Got JSON-RPC error response 00:18:13.052 response: 00:18:13.052 { 00:18:13.052 "code": -32602, 00:18:13.052 "message": "Invalid cntlid range [1-0]" 00:18:13.052 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22705 -I 65520 00:18:13.052 [2024-09-27 15:36:53.493405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22705: invalid cntlid range [1-65520] 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:13.052 { 00:18:13.052 "nqn": "nqn.2016-06.io.spdk:cnode22705", 00:18:13.052 "max_cntlid": 65520, 00:18:13.052 "method": "nvmf_create_subsystem", 00:18:13.052 "req_id": 1 00:18:13.052 } 00:18:13.052 Got JSON-RPC error response 00:18:13.052 response: 00:18:13.052 { 00:18:13.052 "code": -32602, 00:18:13.052 "message": "Invalid cntlid range [1-65520]" 00:18:13.052 }' 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:13.052 { 00:18:13.052 "nqn": "nqn.2016-06.io.spdk:cnode22705", 00:18:13.052 "max_cntlid": 65520, 00:18:13.052 "method": "nvmf_create_subsystem", 00:18:13.052 "req_id": 1 00:18:13.052 } 00:18:13.052 Got JSON-RPC error response 00:18:13.052 response: 00:18:13.052 { 00:18:13.052 "code": -32602, 00:18:13.052 "message": "Invalid cntlid range [1-65520]" 00:18:13.052 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:13.052 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7541 -i 6 -I 5 00:18:13.312 [2024-09-27 15:36:53.682015] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7541: invalid cntlid range [6-5] 00:18:13.312 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:13.312 { 00:18:13.312 "nqn": "nqn.2016-06.io.spdk:cnode7541", 00:18:13.312 "min_cntlid": 6, 00:18:13.312 "max_cntlid": 5, 00:18:13.312 "method": "nvmf_create_subsystem", 00:18:13.312 "req_id": 1 00:18:13.312 } 00:18:13.312 Got JSON-RPC error response 00:18:13.312 response: 00:18:13.312 { 00:18:13.312 "code": -32602, 00:18:13.312 "message": "Invalid cntlid range [6-5]" 00:18:13.312 }' 00:18:13.312 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:13.312 { 00:18:13.312 "nqn": "nqn.2016-06.io.spdk:cnode7541", 00:18:13.312 "min_cntlid": 6, 00:18:13.312 "max_cntlid": 5, 00:18:13.312 "method": "nvmf_create_subsystem", 00:18:13.312 "req_id": 1 00:18:13.312 } 00:18:13.312 Got JSON-RPC error response 00:18:13.312 response: 00:18:13.312 { 00:18:13.312 "code": -32602, 00:18:13.312 "message": "Invalid cntlid range [6-5]" 00:18:13.312 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:13.312 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:13.573 { 00:18:13.573 "name": "foobar", 00:18:13.573 "method": "nvmf_delete_target", 00:18:13.573 "req_id": 1 00:18:13.573 } 00:18:13.573 Got JSON-RPC error response 00:18:13.573 response: 00:18:13.573 { 00:18:13.573 "code": -32602, 00:18:13.573 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:13.573 }' 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:13.573 { 00:18:13.573 "name": "foobar", 00:18:13.573 "method": "nvmf_delete_target", 00:18:13.573 "req_id": 1 00:18:13.573 } 00:18:13.573 Got JSON-RPC error response 00:18:13.573 response: 00:18:13.573 { 00:18:13.573 "code": -32602, 00:18:13.573 "message": "The specified target doesn't exist, cannot delete it." 
00:18:13.573 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.573 rmmod nvme_tcp 00:18:13.573 rmmod nvme_fabrics 00:18:13.573 rmmod nvme_keyring 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 321992 ']' 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 321992 00:18:13.573 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 321992 ']' 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 321992 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321992 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321992' 00:18:13.574 killing process with pid 321992 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 321992 00:18:13.574 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 321992 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # 
iptables-restore 00:18:13.835 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.836 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:13.836 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.836 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.836 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:15.749 00:18:15.749 real 0m14.314s 00:18:15.749 user 0m21.217s 00:18:15.749 sys 0m6.799s 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.749 ************************************ 00:18:15.749 END TEST nvmf_invalid 00:18:15.749 ************************************ 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:15.749 15:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.011 ************************************ 00:18:16.011 START TEST nvmf_connect_stress 00:18:16.011 ************************************ 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:16.011 * Looking for test storage... 
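The nvmftestfini sequence above is the EXIT-trap teardown these targets install at startup: kill the nvmf_tgt process, unload the initiator modules, strip the SPDK-tagged iptables rules, and remove the test namespace. A reduced sketch of that trap, with nvmfpid standing in for the PID the harness records (321992 in this run):

# Reduced sketch of the nvmftestfini teardown above; requires root, and
# cvl_0_0_ns_spdk matches the namespace name used throughout this log.
nvmf_teardown() {
    kill "$nvmfpid" 2>/dev/null || true                   # killprocess
    modprobe -v -r nvme-tcp nvme-fabrics || true          # unload initiator modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # remove_spdk_ns
}
trap nvmf_teardown SIGINT SIGTERM EXIT
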
00:18:16.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.011 --rc genhtml_branch_coverage=1 00:18:16.011 --rc genhtml_function_coverage=1 00:18:16.011 --rc genhtml_legend=1 00:18:16.011 --rc geninfo_all_blocks=1 00:18:16.011 --rc geninfo_unexecuted_blocks=1 00:18:16.011 00:18:16.011 ' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.011 --rc genhtml_branch_coverage=1 00:18:16.011 --rc genhtml_function_coverage=1 00:18:16.011 --rc genhtml_legend=1 00:18:16.011 --rc geninfo_all_blocks=1 00:18:16.011 --rc geninfo_unexecuted_blocks=1 00:18:16.011 00:18:16.011 ' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.011 --rc genhtml_branch_coverage=1 00:18:16.011 --rc genhtml_function_coverage=1 00:18:16.011 --rc genhtml_legend=1 00:18:16.011 --rc geninfo_all_blocks=1 00:18:16.011 --rc geninfo_unexecuted_blocks=1 00:18:16.011 00:18:16.011 ' 00:18:16.011 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:16.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.011 --rc genhtml_branch_coverage=1 00:18:16.011 --rc genhtml_function_coverage=1 00:18:16.011 --rc genhtml_legend=1 00:18:16.011 --rc geninfo_all_blocks=1 00:18:16.012 --rc geninfo_unexecuted_blocks=1 00:18:16.012 00:18:16.012 ' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:16.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:16.012 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:24.159 15:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.159 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:24.160 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:24.160 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:24.160 Found net devices under 0000:31:00.0: cvl_0_0 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:24.160 Found net devices under 0000:31:00.1: cvl_0_1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
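gather_supported_nvmf_pci_devs above walks the whitelisted PCI functions and resolves each to the net device the kernel bound to it by globbing sysfs; that lookup is where the two "Found net devices under 0000:31:00.x" lines come from. A standalone sketch of the same lookup, using the two E810 addresses from this log:

# Map PCI functions to their kernel net devices via sysfs, as above.
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue               # nothing bound here
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
done
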
00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:18:24.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:18:24.160 00:18:24.160 --- 10.0.0.2 ping statistics --- 00:18:24.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.160 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:18:24.160 00:18:24.160 --- 10.0.0.1 ping statistics --- 00:18:24.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.160 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:24.160 15:37:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:24.160 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:24.160 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:24.160 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.160 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.160 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=327317 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 327317 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 327317 ']' 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
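nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on its RPC socket; the "Waiting for process to start up..." line is that poll in progress. A simplified sketch of the pattern (app path, rpc.py path, and the -i/-e/-m flags come from the log; the 100 x 100 ms retry budget is an assumption, the harness's limits differ):

# Start the target in its namespace, then poll the RPC socket until ready.
app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk "$app" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done

The Unix socket at /var/tmp/spdk.sock lives on the shared filesystem, so the poll works from the default namespace even though the app runs inside cvl_0_0_ns_spdk.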
00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.161 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.161 [2024-09-27 15:37:04.103748] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:18:24.161 [2024-09-27 15:37:04.103816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.161 [2024-09-27 15:37:04.192614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:24.161 [2024-09-27 15:37:04.225737] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.161 [2024-09-27 15:37:04.225776] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.161 [2024-09-27 15:37:04.225782] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.161 [2024-09-27 15:37:04.225787] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.161 [2024-09-27 15:37:04.225792] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.161 [2024-09-27 15:37:04.225962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.161 [2024-09-27 15:37:04.226123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.161 [2024-09-27 15:37:04.226124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 [2024-09-27 15:37:04.962804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 
15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 [2024-09-27 15:37:04.995324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:24.733 15:37:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.733 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 NULL1 00:18:24.733 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.733 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=327577 00:18:24.733 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:24.733 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 
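From here the log settles into the stress cycle: connect_stress (PID 327577) drives repeated connect/disconnect traffic against cnode1 while the harness replays the batch of RPCs generated above (one per seq 1 20 pass) from rpc.txt and, between passes, uses kill -0 to assert the initiator is still alive. A condensed sketch of that loop; the real harness pipes the batch through one long-lived rpc_cmd session, so invoking rpc.py per line here is a simplification:

# Keep replaying the RPC batch until the stress initiator exits or dies.
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
while kill -0 "$PERF_PID" 2>/dev/null; do
    while IFS= read -r cmd; do
        $rpc $cmd >/dev/null   # intentional word splitting: one subcommand plus args per line
    done < "$rpcs"
done
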
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:24.734 15:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.734 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.994 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.995 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:24.995 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.995 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.995 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.568 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.568 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:25.568 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.568 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.568 15:37:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.827 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.827 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:25.827 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.827 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.827 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.089 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.089 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:26.089 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.089 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.089 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.351 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.351 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:26.351 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.351 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.351 15:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.611 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.611 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:26.611 15:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.611 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.612 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.185 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:27.185 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.185 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.185 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.445 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.446 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:27.446 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.446 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.446 15:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.706 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.706 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:27.706 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.706 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.706 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.967 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.967 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:27.967 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.967 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.967 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.229 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.229 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:28.229 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.229 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.229 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.801 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.801 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:28.801 15:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.801 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.801 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.062 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:29.062 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.062 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.062 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.322 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.322 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:29.322 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.322 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.322 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.583 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.583 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:29.583 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.583 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.583 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.842 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.842 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:29.842 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.842 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.842 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.414 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.414 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:30.414 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.414 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.414 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.674 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.674 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:30.674 15:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.674 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.674 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.934 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.934 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:30.934 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.934 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.934 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.194 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.194 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:31.194 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.194 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.194 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.454 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.454 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:31.454 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.454 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.454 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:32.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.285 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:32.285 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.285 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.285 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.545 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.545 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:32.545 15:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.545 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.545 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.805 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.805 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:32.805 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.805 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.805 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.374 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.374 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:33.374 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.374 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.374 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.634 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.634 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:33.634 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.634 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.634 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.894 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.894 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:33.894 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.894 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.894 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.154 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.155 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:34.155 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.155 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.155 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.415 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.415 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:34.415 15:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.415 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.415 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.985 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 327577 00:18:34.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (327577) - No such process 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 327577 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.985 rmmod nvme_tcp 00:18:34.985 rmmod nvme_fabrics 00:18:34.985 rmmod nvme_keyring 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 327317 ']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 327317 ']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo 
']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 327317' 00:18:34.985 killing process with pid 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 327317 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.985 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:37.531 00:18:37.531 real 0m21.295s 00:18:37.531 user 0m43.911s 00:18:37.531 sys 0m7.799s 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.531 ************************************ 00:18:37.531 END TEST nvmf_connect_stress 00:18:37.531 ************************************ 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.531 ************************************ 00:18:37.531 START TEST nvmf_fused_ordering 00:18:37.531 ************************************ 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:37.531 * Looking for test storage... 
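The long run of "kill -0 327577" / rpc_cmd pairs in the connect_stress trace above is the test's monitor loop: it keeps replaying a batch of RPCs for as long as the stress process stays alive, and "kill: (327577) - No such process" simply marks the iteration where the workload exited. A minimal standalone sketch of that pattern, assuming a placeholder workload binary and RPC batch file rather than the script's actual values:

    # Liveness-polling pattern from connect_stress.sh (sketch only).
    ./stress_workload &                   # hypothetical stand-in for the stress binary
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do  # signal 0: test process existence, send nothing
        ./scripts/rpc.py < rpc.txt        # replay the batched RPC commands
    done
    wait "$pid"                           # reap once "No such process" ends the loop
    rm -f rpc.txt                         # same cleanup as the trace's rm -f step

The rm -f of rpc.txt after the loop matches the batch file that the "for i in $(seq 1 20)" / cat trace at the start of this run was building up.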
00:18:37.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:37.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.531 --rc genhtml_branch_coverage=1 00:18:37.531 --rc genhtml_function_coverage=1 00:18:37.531 --rc genhtml_legend=1 00:18:37.531 --rc geninfo_all_blocks=1 00:18:37.531 --rc geninfo_unexecuted_blocks=1 00:18:37.531 00:18:37.531 ' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:37.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.531 --rc genhtml_branch_coverage=1 00:18:37.531 --rc genhtml_function_coverage=1 00:18:37.531 --rc genhtml_legend=1 00:18:37.531 --rc geninfo_all_blocks=1 00:18:37.531 --rc geninfo_unexecuted_blocks=1 00:18:37.531 00:18:37.531 ' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:37.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.531 --rc genhtml_branch_coverage=1 00:18:37.531 --rc genhtml_function_coverage=1 00:18:37.531 --rc genhtml_legend=1 00:18:37.531 --rc geninfo_all_blocks=1 00:18:37.531 --rc geninfo_unexecuted_blocks=1 00:18:37.531 00:18:37.531 ' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:37.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.531 --rc genhtml_branch_coverage=1 00:18:37.531 --rc genhtml_function_coverage=1 00:18:37.531 --rc genhtml_legend=1 00:18:37.531 --rc geninfo_all_blocks=1 00:18:37.531 --rc geninfo_unexecuted_blocks=1 00:18:37.531 00:18:37.531 ' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.531 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:37.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:37.532 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.683 15:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.683 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:45.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:45.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:45.684 Found net devices under 0000:31:00.0: cvl_0_0 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:45.684 Found net devices under 0000:31:00.1: cvl_0_1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
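Each "Found net devices under …" line above comes from a plain sysfs glob: for every NIC the PCI scan matched, the helper lists the interfaces registered under that device and strips the directory prefix. The same lookup as a self-contained sketch, using the PCI address this rig reported (any address with a bound netdev works):

    # List network interfaces belonging to one PCI function via sysfs,
    # mirroring the pci_net_devs logic traced above.
    shopt -s nullglob                                  # empty array if nothing is bound
    pci=0000:31:00.0                                   # address taken from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"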
00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:18:45.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:18:45.684 00:18:45.684 --- 10.0.0.2 ping statistics --- 00:18:45.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.684 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:18:45.684 00:18:45.684 --- 10.0.0.1 ping statistics --- 00:18:45.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.684 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.684 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=333914 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 333914 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 333914 ']' 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
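The nvmf_tcp_init block above wires up the point-to-point test network for NVMe/TCP: the target-side port cvl_0_0 moves into a private namespace and gets 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP/4420 is explicitly admitted (tagged with an SPDK_NVMF comment so the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore can strip the rule again), and one ping in each direction proves the path before any NVMe traffic flows. Condensed into a sketch with the interface names this rig reported:

    # Build the two-port NVMe/TCP test network, mirroring nvmf_tcp_init.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF              # tag lets cleanup grep it out later
    ping -c 1 10.0.0.2                              # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns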
00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.685 15:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:45.685 [2024-09-27 15:37:25.613520] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:18:45.685 [2024-09-27 15:37:25.613585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.685 [2024-09-27 15:37:25.704900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.685 [2024-09-27 15:37:25.751037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.685 [2024-09-27 15:37:25.751093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.685 [2024-09-27 15:37:25.751102] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.685 [2024-09-27 15:37:25.751109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.685 [2024-09-27 15:37:25.751120] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.685 [2024-09-27 15:37:25.751151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 [2024-09-27 15:37:26.493327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 [2024-09-27 15:37:26.517622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 NULL1 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.258 15:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:46.258 [2024-09-27 15:37:26.586635] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
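Before the fused_ordering initiator above is launched, the whole target is assembled over the RPC socket: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and room for 10 namespaces (-m 10), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1. Assuming rpc_cmd forwards to scripts/rpc.py as in SPDK's test harness, the same bring-up can be replayed by hand (values copied from the trace):

    # Replay of the target bring-up traced above.
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512 B blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The initiator's banner below ("Attached to nqn.2016-06.io.spdk:cnode1 … Namespace ID: 1 size: 1GB") confirms exactly this namespace.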
00:18:46.258 [2024-09-27 15:37:26.586685] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334039 ]
00:18:46.833 Attached to nqn.2016-06.io.spdk:cnode1
00:18:46.833 Namespace ID: 1 size: 1GB
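Each fused_ordering(N) line below marks one completed iteration of the tool's submit loop, 1024 in all (0 through 1023). The NVM command set defines a single fused operation, the Compare and Write pair, so the counters presumably track fused pairs completing in order over the TCP transport. On an initiator, whether a connected controller advertises fused support at all can be read from Identify Controller; a small nvme-cli spot check (the device node is a placeholder, not taken from this run):

# Hypothetical check: the 'fuses' field of Identify Controller (bit 0 =
# Compare and Write pair) reports fused-operation support; /dev/nvme0 is a
# placeholder device node.
nvme id-ctrl /dev/nvme0 | grep -i '^fuses'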
00:18:46.833 fused_ordering(0)
...
00:18:49.207 fused_ordering(1023)
00:18:49.207 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:18:49.207 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:18:49.207 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup
00:18:49.207 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:49.208 rmmod nvme_tcp
00:18:49.208 rmmod nvme_fabrics
00:18:49.208 rmmod nvme_keyring
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:18:49.208 15:37:29
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 333914 ']' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 333914 ']' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 333914' 00:18:49.208 killing process with pid 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 333914 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.208 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.236 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:51.530 00:18:51.530 real 0m14.114s 00:18:51.530 user 0m8.080s 00:18:51.530 sys 0m7.277s 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:51.530 ************************************ 00:18:51.530 END TEST nvmf_fused_ordering 00:18:51.530 
************************************ 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.530 ************************************ 00:18:51.530 START TEST nvmf_ns_masking 00:18:51.530 ************************************ 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:51.530 * Looking for test storage... 00:18:51.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.530 --rc genhtml_branch_coverage=1 00:18:51.530 --rc genhtml_function_coverage=1 00:18:51.530 --rc genhtml_legend=1 00:18:51.530 --rc geninfo_all_blocks=1 00:18:51.530 --rc geninfo_unexecuted_blocks=1 00:18:51.530 00:18:51.530 ' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.530 --rc genhtml_branch_coverage=1 00:18:51.530 --rc genhtml_function_coverage=1 00:18:51.530 --rc genhtml_legend=1 00:18:51.530 --rc geninfo_all_blocks=1 00:18:51.530 --rc geninfo_unexecuted_blocks=1 00:18:51.530 00:18:51.530 ' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.530 --rc genhtml_branch_coverage=1 00:18:51.530 --rc genhtml_function_coverage=1 00:18:51.530 --rc genhtml_legend=1 00:18:51.530 --rc geninfo_all_blocks=1 00:18:51.530 --rc geninfo_unexecuted_blocks=1 00:18:51.530 00:18:51.530 ' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:51.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.530 --rc genhtml_branch_coverage=1 00:18:51.530 --rc genhtml_function_coverage=1 00:18:51.530 --rc genhtml_legend=1 00:18:51.530 --rc geninfo_all_blocks=1 00:18:51.530 --rc geninfo_unexecuted_blocks=1 00:18:51.530 00:18:51.530 ' 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.530 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.530 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.820 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0551bccc-13fe-4ffa-8a27-bac5534ba5b7 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=cbdc791a-267c-4819-97b2-371f57972b4e 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f7fa9f1c-8d9c-4a77-95cb-cc51af085128 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:51.821 15:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.042 15:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:00.042 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:00.042 15:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:00.042 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:00.043 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:00.043 Found net devices under 0000:31:00.0: cvl_0_0 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:00.043 
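
Each matched PCI function is then resolved to its kernel netdev through the sysfs glob visible in the trace, which is where the cvl_0_0 / cvl_0_1 names come from:

    pci=0000:31:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 on this machine
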
15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:00.043 Found net devices under 0000:31:00.1: cvl_0_1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.043 15:37:39 
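
At this point nvmftestinit has turned the two E810 ports into a back-to-back TCP rig: port .0 moves into a fresh network namespace to act as the target side (10.0.0.2), while port .1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
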
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:00.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:19:00.043 00:19:00.043 --- 10.0.0.2 ping statistics --- 00:19:00.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.043 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:19:00.043 00:19:00.043 --- 10.0.0.1 ping statistics --- 00:19:00.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.043 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=338940 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 338940 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 338940 ']' 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 
-- # local max_retries=100 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.043 15:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.043 [2024-09-27 15:37:39.781530] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:19:00.043 [2024-09-27 15:37:39.781591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.043 [2024-09-27 15:37:39.874189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.043 [2024-09-27 15:37:39.919827] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.043 [2024-09-27 15:37:39.919882] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.043 [2024-09-27 15:37:39.919891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.043 [2024-09-27 15:37:39.919906] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.043 [2024-09-27 15:37:39.919912] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.043 [2024-09-27 15:37:39.919936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.305 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.567 [2024-09-27 15:37:40.820179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.567 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:00.567 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:00.567 15:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:00.567 Malloc1 00:19:00.567 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:00.828 
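
With connectivity verified in both directions, the target application starts inside the namespace and gets a TCP transport plus the two malloc bdevs (64 MB, 512-byte blocks) that the masking test will juggle. The bring-up, condensed (rpc.py stands in for the workspace-absolute scripts/rpc.py path used throughout this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
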
Malloc2 00:19:00.828 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.089 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:01.351 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.351 [2024-09-27 15:37:41.738045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.351 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:01.351 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f7fa9f1c-8d9c-4a77-95cb-cc51af085128 -a 10.0.0.2 -s 4420 -i 4 00:19:01.612 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:01.612 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.612 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.612 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:01.612 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.528 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.797 [ 0]:0x1 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54cc9b77d2bd4e59b986c9ce7ea911a3 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54cc9b77d2bd4e59b986c9ce7ea911a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.797 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.797 [ 0]:0x1 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54cc9b77d2bd4e59b986c9ce7ea911a3 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54cc9b77d2bd4e59b986c9ce7ea911a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.058 [ 1]:0x2 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.058 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.320 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:04.582 15:37:44 
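
This re-attach is the pivot of the whole test: with --no-auto-visible, namespace 1 stays attached to the subsystem but is hidden from every host until one is explicitly allowed. The visibility helper exploits the observable effect — a masked namespace drops out of nvme list-ns and its Identify Namespace data comes back zeroed — so it compares the NGUID against all zeros. The re-attach, plus a sketch of the helper reconstructed from the xtrace above:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # reconstruction of ns_is_visible from target/ns_masking.sh@43-45
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"     # prints e.g. "[ 0]:0x1" when active
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
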
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f7fa9f1c-8d9c-4a77-95cb-cc51af085128 -a 10.0.0.2 -s 4420 -i 4 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:04.582 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.496 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.496 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.496 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.758 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:06.758 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.758 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:06.758 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:06.758 15:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:06.758 [ 0]:0x2 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.758 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.019 [ 0]:0x1 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54cc9b77d2bd4e59b986c9ce7ea911a3 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54cc9b77d2bd4e59b986c9ce7ea911a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.019 [ 1]:0x2 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.019 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.280 [ 0]:0x2 00:19:07.280 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.280 15:37:47 
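
Masking is symmetric and takes effect on the live connection — no reconnect separates the check that saw [ 0]:0x1 appear after nvmf_ns_add_host from the NOT-wrapped check now confirming it vanished after nvmf_ns_remove_host. The toggle pair being exercised:

    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 appears to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and is hidden again
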
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.541 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:07.541 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.541 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:07.541 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.541 15:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f7fa9f1c-8d9c-4a77-95cb-cc51af085128 -a 10.0.0.2 -s 4420 -i 4 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:07.801 15:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:09.715 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@43 -- # grep 0x1 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:09.977 [ 0]:0x1 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54cc9b77d2bd4e59b986c9ce7ea911a3 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54cc9b77d2bd4e59b986c9ce7ea911a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:09.977 [ 1]:0x2 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:09.977 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.238 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:10.238 [ 0]:0x2 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.239 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:10.501 
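
The NOT wrapper threaded through these traces asserts that a command fails: it validates its argument via type -t / type -P (so shell functions and external binaries both dispatch), runs it, and inverts the exit status. A minimal reconstruction, omitting the signal handling the real helper applies when es exceeds 128:

    NOT() {
        local es=0
        "$@" || es=$?
        # success for NOT means the wrapped command returned nonzero
        (( !es == 0 ))
    }
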
15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:10.501 [2024-09-27 15:37:50.886800] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:10.501 request: 00:19:10.501 { 00:19:10.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.501 "nsid": 2, 00:19:10.501 "host": "nqn.2016-06.io.spdk:host1", 00:19:10.501 "method": "nvmf_ns_remove_host", 00:19:10.501 "req_id": 1 00:19:10.501 } 00:19:10.501 Got JSON-RPC error response 00:19:10.501 response: 00:19:10.501 { 00:19:10.501 "code": -32602, 00:19:10.501 "message": "Invalid parameters" 00:19:10.501 } 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.501 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.762 
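
That -32602 is the expected outcome: namespace 2 was attached without --no-auto-visible, so per-host visibility edits are rejected, and the NOT wrapper turns the refusal into a pass. The same call could be replayed by hand against the target's RPC socket (a hypothetical direct invocation — rpc.py speaks this JSON-RPC 2.0 framing over the Unix socket for every command in this log):

    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_ns_remove_host",
      "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2,"host":"nqn.2016-06.io.spdk:host1"}}' |
        nc -U /var/tmp/spdk.sock
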
15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.762 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:10.762 [ 0]:0x2 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3d93ac76a37b4ca984f2a7cb4e2383e9 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3d93ac76a37b4ca984f2a7cb4e2383e9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:10.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=341293 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 341293 /var/tmp/host.sock 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 341293 ']' 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:10.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.762 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:11.023 [2024-09-27 15:37:51.294937] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
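
For the final phase the test brings up a second SPDK application on core 1 to play the host side, with its own RPC socket so the two instances can be driven independently; ns_masking.sh's hostrpc helper is simply rpc.py pointed at that socket (a sketch — the script records the pid as $hostpid, as traced above):

    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }
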
00:19:11.023 [2024-09-27 15:37:51.294989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341293 ] 00:19:11.024 [2024-09-27 15:37:51.372431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.024 [2024-09-27 15:37:51.403606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.596 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.596 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:11.596 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.858 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0551bccc-13fe-4ffa-8a27-bac5534ba5b7 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0551BCCC13FE4FFA8A27BAC5534BA5B7 -i 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid cbdc791a-267c-4819-97b2-371f57972b4e 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:19:12.120 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CBDC791A267C481997B2371F57972B4E -i 00:19:12.381 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:12.642 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:12.642 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:12.642 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:13.215 nvme0n1 00:19:13.215 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:13.215 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
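
The -g values passed to nvmf_subsystem_add_ns here come from uuid2nguid in nvmf/common.sh: an NGUID is the namespace UUID with the dashes stripped and the hex uppercased, as the tr -d - trace and the resulting 0551BCCC... string show. A sketch of the conversion (exact helper body assumed from its trace and output):

    uuid2nguid() {
        local uuid=${1^^}      # uppercase the hex digits
        tr -d - <<< "$uuid"    # 0551bccc-13fe-...-b5b7 -> 0551BCCC13FE4FFA8A27BAC5534BA5B7
    }
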
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:13.477 nvme1n2 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:13.477 15:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:13.738 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0551bccc-13fe-4ffa-8a27-bac5534ba5b7 == \0\5\5\1\b\c\c\c\-\1\3\f\e\-\4\f\f\a\-\8\a\2\7\-\b\a\c\5\5\3\4\b\a\5\b\7 ]] 00:19:13.738 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:13.738 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:13.738 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ cbdc791a-267c-4819-97b2-371f57972b4e == \c\b\d\c\7\9\1\a\-\2\6\7\c\-\4\8\1\9\-\9\7\b\2\-\3\7\1\f\5\7\9\7\2\b\4\e ]] 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 341293 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 341293 ']' 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 341293 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 341293 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 341293' 00:19:13.999 killing 
process with pid 341293 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 341293 00:19:13.999 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 341293 00:19:14.261 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.522 rmmod nvme_tcp 00:19:14.522 rmmod nvme_fabrics 00:19:14.522 rmmod nvme_keyring 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 338940 ']' 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 338940 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 338940 ']' 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 338940 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 338940 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 338940' 00:19:14.522 killing process with pid 338940 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 338940 00:19:14.522 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 338940 00:19:14.783 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:14.783 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:14.783 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.784 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:16.702 00:19:16.702 real 0m25.308s 00:19:16.702 user 0m25.476s 00:19:16.702 sys 0m7.970s 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:16.702 ************************************ 00:19:16.702 END TEST nvmf_ns_masking 00:19:16.702 ************************************ 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.702 15:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.964 ************************************ 00:19:16.964 START TEST nvmf_nvme_cli 00:19:16.964 ************************************ 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:16.964 * Looking for test storage... 
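The ns_masking run that just finished ends on its key assertion: after attaching host2 as controller nvme1, the host-side SPDK app must see exactly the namespaces it was granted (nvme0n1 and nvme1n2), and each bdev must carry the UUID of the namespace it is masked to. Condensed from the trace above into a sketch (rpc.py pointed at the host app's /var/tmp/host.sock; names and UUIDs are the ones reported in the log):

    # attach a second host controller over TCP (flags as in the trace)
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

    # the host must see exactly the unmasked namespaces...
    names=$(rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ $names == 'nvme0n1 nvme1n2' ]]

    # ...and each bdev's UUID must match the namespace it maps to
    uuid=$(rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
    [[ $uuid == '0551bccc-13fe-4ffa-8a27-bac5534ba5b7' ]]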
00:19:16.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:16.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.964 --rc genhtml_branch_coverage=1 00:19:16.964 --rc genhtml_function_coverage=1 00:19:16.964 --rc genhtml_legend=1 00:19:16.964 --rc geninfo_all_blocks=1 00:19:16.964 --rc geninfo_unexecuted_blocks=1 00:19:16.964 00:19:16.964 ' 00:19:16.964 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:16.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.964 --rc genhtml_branch_coverage=1 00:19:16.964 --rc genhtml_function_coverage=1 00:19:16.964 --rc genhtml_legend=1 00:19:16.964 --rc geninfo_all_blocks=1 00:19:16.964 --rc geninfo_unexecuted_blocks=1 00:19:16.964 00:19:16.964 ' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.965 --rc genhtml_branch_coverage=1 00:19:16.965 --rc genhtml_function_coverage=1 00:19:16.965 --rc genhtml_legend=1 00:19:16.965 --rc geninfo_all_blocks=1 00:19:16.965 --rc geninfo_unexecuted_blocks=1 00:19:16.965 00:19:16.965 ' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.965 --rc genhtml_branch_coverage=1 00:19:16.965 --rc genhtml_function_coverage=1 00:19:16.965 --rc genhtml_legend=1 00:19:16.965 --rc geninfo_all_blocks=1 00:19:16.965 --rc geninfo_unexecuted_blocks=1 00:19:16.965 00:19:16.965 ' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
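The version probe traced above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros. Stripped of the trace plumbing, the comparison amounts to the following simplified sketch (the real helper additionally validates each field through its decimal function):

    # succeed when version $1 sorts strictly before version $2
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov older than 2.x'   # succeeds: 1 < 2 in the first field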
00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.965 15:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.965 15:37:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.109 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:25.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:25.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:25.110 15:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:25.110 Found net devices under 0000:31:00.0: cvl_0_0 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:25.110 Found net devices under 0000:31:00.1: cvl_0_1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:25.110 15:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.110 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.110 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:25.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:19:25.111 00:19:25.111 --- 10.0.0.2 ping statistics --- 00:19:25.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.111 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:19:25.111 00:19:25.111 --- 10.0.0.1 ping statistics --- 00:19:25.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.111 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=346384 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 346384 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 346384 ']' 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.111 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.111 [2024-09-27 15:38:05.155033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:19:25.111 [2024-09-27 15:38:05.155101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.111 [2024-09-27 15:38:05.245262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.111 [2024-09-27 15:38:05.294009] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.111 [2024-09-27 15:38:05.294064] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.111 [2024-09-27 15:38:05.294072] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.111 [2024-09-27 15:38:05.294080] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.111 [2024-09-27 15:38:05.294086] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.111 [2024-09-27 15:38:05.294235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.111 [2024-09-27 15:38:05.294372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.111 [2024-09-27 15:38:05.294529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.111 [2024-09-27 15:38:05.294530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.684 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.684 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:25.684 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:25.684 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.684 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 [2024-09-27 15:38:06.035627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 Malloc0 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 Malloc1 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 [2024-09-27 15:38:06.136765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.684 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:25.946 00:19:25.946 Discovery Log Number of Records 2, Generation counter 2 00:19:25.946 =====Discovery Log Entry 0====== 00:19:25.946 trtype: tcp 00:19:25.946 adrfam: ipv4 00:19:25.946 subtype: current discovery subsystem 00:19:25.946 treq: not required 00:19:25.946 portid: 0 00:19:25.946 trsvcid: 4420 00:19:25.946 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:25.946 traddr: 10.0.0.2 00:19:25.946 eflags: explicit discovery connections, duplicate discovery information 00:19:25.946 sectype: none 00:19:25.946 =====Discovery Log Entry 1====== 00:19:25.946 trtype: tcp 00:19:25.946 adrfam: ipv4 00:19:25.946 subtype: nvme subsystem 00:19:25.946 treq: not required 00:19:25.946 portid: 0 00:19:25.946 trsvcid: 4420 00:19:25.946 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:25.946 traddr: 10.0.0.2 00:19:25.946 eflags: none 00:19:25.946 sectype: none 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:25.946 15:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:27.860 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:27.860 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.860 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.860 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:27.860 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:27.861 15:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:29.772 15:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:29.772 /dev/nvme0n2 ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:29.772 15:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.772 15:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.772 rmmod nvme_tcp 00:19:29.772 rmmod nvme_fabrics 00:19:29.772 rmmod nvme_keyring 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 346384 ']' 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 346384 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 346384 ']' 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 346384 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 346384 
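The nvme_cli pass traced here follows the usual connect/verify/disconnect shape: discover the target, connect with the generated host NQN, wait until lsblk shows both malloc-backed namespaces under serial SPDKISFASTANDAWESOME, then disconnect and re-check. In outline (flags exactly as in the trace; NVME_HOSTNQN and NVME_HOSTID are the values nvme gen-hostnqn produced above):

    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # two namespaces (Malloc0, Malloc1) were added, so two devices must appear
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do
        sleep 2
    done

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1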
00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 346384' 00:19:29.772 killing process with pid 346384 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 346384 00:19:29.772 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 346384 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.033 15:38:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.583 00:19:32.583 real 0m15.282s 00:19:32.583 user 0m22.610s 00:19:32.583 sys 0m6.450s 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:32.583 ************************************ 00:19:32.583 END TEST nvmf_nvme_cli 00:19:32.583 ************************************ 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.583 ************************************ 00:19:32.583 START TEST nvmf_vfio_user 00:19:32.583 ************************************ 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:32.583 * Looking for test storage... 00:19:32.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.583 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
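Note: the common.sh trace above pins the host identity once (via nvme-cli's gen-hostnqn) and reuses it for every later connect, and the "[: : integer expression expected" message a few lines up is the classic symptom of testing an empty variable with -eq (common.sh line 33). A minimal bash sketch of both patterns — deriving the host ID from the NQN this way is a simplification of what the harness does, and SOME_FLAG is a hypothetical stand-in for the unset variable:

  #!/usr/bin/env bash
  # Fix the host identity once, as nvmf/common.sh does in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # Defensive numeric test: without the :-0 default, an empty value
  # reproduces the "[: : integer expression expected" error seen above.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi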
00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=347883 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 347883' 00:19:32.584 Process pid: 347883 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 347883 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 347883 ']' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.584 15:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:32.584 [2024-09-27 15:38:12.840827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:19:32.584 [2024-09-27 15:38:12.840881] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.584 [2024-09-27 15:38:12.912047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.584 [2024-09-27 15:38:12.942221] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.584 [2024-09-27 15:38:12.942252] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.584 [2024-09-27 15:38:12.942258] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.584 [2024-09-27 15:38:12.942263] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.584 [2024-09-27 15:38:12.942267] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.584 [2024-09-27 15:38:12.942414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.584 [2024-09-27 15:38:12.942569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.584 [2024-09-27 15:38:12.942720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.584 [2024-09-27 15:38:12.942722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.584 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.584 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:32.584 15:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:33.971 Malloc1 00:19:33.971 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:34.232 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:34.494 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:34.494 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:34.494 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:34.755 15:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:34.755 Malloc2 00:19:34.755 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
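Note: condensed from the xtrace above, setup_nvmf_vfio_user is one transport call plus a fixed per-device RPC batch against the running nvmf_tgt. A sketch with $SPDK standing in for the workspace checkout; the loop is added here for brevity, but the commands and arguments are exactly as traced:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"

  "$rpc" nvmf_create_transport -t VFIOUSER              # one-time transport setup
  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"                                   # socket directory the listener exposes
      "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
      "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
  done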
00:19:35.017 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:35.279 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:35.279 [2024-09-27 15:38:15.754535] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:19:35.279 [2024-09-27 15:38:15.754591] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348567 ] 00:19:35.544 [2024-09-27 15:38:15.782983] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:35.544 [2024-09-27 15:38:15.793159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:35.544 [2024-09-27 15:38:15.793175] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0df00a5000 00:19:35.544 [2024-09-27 15:38:15.794155] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:35.544 [2024-09-27 15:38:15.795154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:35.544 [2024-09-27 15:38:15.796166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:35.544 [2024-09-27 15:38:15.797169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:35.545 [2024-09-27 15:38:15.798167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:35.545 [2024-09-27 15:38:15.799178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:35.545 [2024-09-27 15:38:15.800181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:35.545 [2024-09-27 15:38:15.801189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:35.545 [2024-09-27 15:38:15.802195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:35.545 [2024-09-27 15:38:15.802202] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0deedaf000 00:19:35.545 [2024-09-27 15:38:15.803120] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:35.545 [2024-09-27 15:38:15.815568] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:35.545 [2024-09-27 15:38:15.815591] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:35.545 [2024-09-27 15:38:15.818288] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:35.545 [2024-09-27 15:38:15.818319] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:35.545 [2024-09-27 15:38:15.818386] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:35.545 [2024-09-27 15:38:15.818399] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:35.545 [2024-09-27 15:38:15.818403] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:35.545 [2024-09-27 15:38:15.822899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:35.545 [2024-09-27 15:38:15.822905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:35.545 [2024-09-27 15:38:15.822910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:35.545 [2024-09-27 15:38:15.823313] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:35.545 [2024-09-27 15:38:15.823318] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:35.545 [2024-09-27 15:38:15.823324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:35.545 [2024-09-27 15:38:15.824316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:35.545 [2024-09-27 15:38:15.824322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:35.545 [2024-09-27 15:38:15.825326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:35.545 [2024-09-27 
15:38:15.825332] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:35.545 [2024-09-27 15:38:15.825335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:35.545 [2024-09-27 15:38:15.825340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:35.545 [2024-09-27 15:38:15.825444] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:35.545 [2024-09-27 15:38:15.825447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:35.545 [2024-09-27 15:38:15.825451] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:35.545 [2024-09-27 15:38:15.826333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:35.545 [2024-09-27 15:38:15.827337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:35.545 [2024-09-27 15:38:15.828346] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:35.545 [2024-09-27 15:38:15.829350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:35.546 [2024-09-27 15:38:15.829400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:35.546 [2024-09-27 15:38:15.830357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:35.546 [2024-09-27 15:38:15.830362] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:35.546 [2024-09-27 15:38:15.830368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830383] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:35.546 [2024-09-27 15:38:15.830388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830399] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:35.546 [2024-09-27 15:38:15.830403] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:35.546 [2024-09-27 15:38:15.830406] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.546 [2024-09-27 15:38:15.830416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:35.546 [2024-09-27 15:38:15.830448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:35.546 [2024-09-27 15:38:15.830455] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:35.546 [2024-09-27 15:38:15.830459] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:35.546 [2024-09-27 15:38:15.830462] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:35.546 [2024-09-27 15:38:15.830465] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:35.546 [2024-09-27 15:38:15.830468] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:35.546 [2024-09-27 15:38:15.830471] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:35.546 [2024-09-27 15:38:15.830475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:35.546 [2024-09-27 15:38:15.830498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:35.546 [2024-09-27 15:38:15.830506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.546 [2024-09-27 15:38:15.830511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.546 [2024-09-27 15:38:15.830517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.546 [2024-09-27 15:38:15.830523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.546 [2024-09-27 15:38:15.830527] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:35.546 [2024-09-27 15:38:15.830546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:35.546 [2024-09-27 15:38:15.830552] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:35.546 [2024-09-27 15:38:15.830555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:35.546 [2024-09-27 15:38:15.830573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:35.547 [2024-09-27 15:38:15.830585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:35.547 [2024-09-27 15:38:15.830628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:35.547 [2024-09-27 15:38:15.830633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:35.547 [2024-09-27 15:38:15.830638] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:35.547 [2024-09-27 15:38:15.830641] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:35.547 [2024-09-27 15:38:15.830644] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.547 [2024-09-27 15:38:15.830648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:35.547 [2024-09-27 15:38:15.830659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:35.547 [2024-09-27 15:38:15.830665] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:35.547 [2024-09-27 15:38:15.830674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:35.547 [2024-09-27 15:38:15.830680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:35.547 [2024-09-27 15:38:15.830685] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:35.547 [2024-09-27 15:38:15.830688] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:35.547 [2024-09-27 15:38:15.830690] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.547 [2024-09-27 15:38:15.830695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:35.547 [2024-09-27 15:38:15.830712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:35.547 [2024-09-27 15:38:15.830721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:35.547 [2024-09-27 15:38:15.830726] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:35.549 [2024-09-27 15:38:15.830731] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:35.549 [2024-09-27 15:38:15.830734] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:35.549 [2024-09-27 15:38:15.830736] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.549 [2024-09-27 15:38:15.830742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:35.549 [2024-09-27 15:38:15.830754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:35.549 [2024-09-27 15:38:15.830760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:35.549 [2024-09-27 15:38:15.830765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:35.549 [2024-09-27 15:38:15.830771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:35.550 [2024-09-27 15:38:15.830775] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:35.550 [2024-09-27 15:38:15.830778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:35.550 [2024-09-27 15:38:15.830782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:35.550 [2024-09-27 15:38:15.830786] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:35.550 [2024-09-27 15:38:15.830789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:35.550 [2024-09-27 15:38:15.830793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:35.550 [2024-09-27 15:38:15.830806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:35.550 [2024-09-27 15:38:15.830814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:35.550 [2024-09-27 15:38:15.830822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:35.550 [2024-09-27 15:38:15.830832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:35.550 [2024-09-27 15:38:15.830840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:35.550 [2024-09-27 15:38:15.830850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:35.550 [2024-09-27 15:38:15.830858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:35.550 [2024-09-27 15:38:15.830868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:35.550 [2024-09-27 15:38:15.830877] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:35.550 [2024-09-27 15:38:15.830881] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:35.550 [2024-09-27 15:38:15.830883] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:35.551 [2024-09-27 15:38:15.830886] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:35.551 [2024-09-27 15:38:15.830888] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:35.551 [2024-09-27 15:38:15.830896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:35.551 [2024-09-27 15:38:15.830901] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:35.551 [2024-09-27 15:38:15.830905] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:35.551 [2024-09-27 15:38:15.830907] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.551 [2024-09-27 15:38:15.830912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:35.551 [2024-09-27 15:38:15.830917] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:35.551 [2024-09-27 15:38:15.830920] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:35.551 [2024-09-27 15:38:15.830922] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.551 [2024-09-27 15:38:15.830926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:35.551 [2024-09-27 15:38:15.830931] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:35.551 [2024-09-27 15:38:15.830934] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:35.551 [2024-09-27 15:38:15.830937] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:35.551 [2024-09-27 15:38:15.830941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:35.551 [2024-09-27 15:38:15.830946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:35.551 [2024-09-27 15:38:15.830954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:35.551 [2024-09-27 15:38:15.830962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:35.551 [2024-09-27 15:38:15.830967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:35.551 ===================================================== 00:19:35.551 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:35.551 ===================================================== 00:19:35.551 Controller Capabilities/Features 00:19:35.551 ================================ 00:19:35.551 Vendor ID: 4e58 00:19:35.552 Subsystem Vendor ID: 4e58 00:19:35.552 Serial Number: SPDK1 00:19:35.552 Model Number: SPDK bdev Controller 00:19:35.552 Firmware Version: 25.01 00:19:35.552 Recommended Arb Burst: 6 00:19:35.552 IEEE OUI Identifier: 8d 6b 50 00:19:35.552 Multi-path I/O 00:19:35.552 May have multiple subsystem ports: Yes 00:19:35.552 May have multiple controllers: Yes 00:19:35.552 Associated with SR-IOV VF: No 00:19:35.552 Max Data Transfer Size: 131072 00:19:35.552 Max Number of Namespaces: 32 00:19:35.552 Max Number of I/O Queues: 127 00:19:35.552 NVMe Specification Version (VS): 1.3 00:19:35.552 NVMe Specification Version (Identify): 1.3 00:19:35.552 Maximum Queue Entries: 256 00:19:35.552 Contiguous Queues Required: Yes 00:19:35.552 Arbitration Mechanisms Supported 00:19:35.552 Weighted Round Robin: Not Supported 00:19:35.552 Vendor Specific: Not Supported 00:19:35.552 Reset Timeout: 15000 ms 00:19:35.552 Doorbell Stride: 4 bytes 00:19:35.552 NVM Subsystem Reset: Not Supported 00:19:35.552 Command Sets Supported 00:19:35.552 NVM Command Set: Supported 00:19:35.552 Boot Partition: Not Supported 00:19:35.552 Memory Page Size Minimum: 4096 bytes 00:19:35.552 Memory Page Size Maximum: 4096 bytes 00:19:35.552 Persistent Memory Region: Not Supported 00:19:35.552 Optional Asynchronous Events Supported 00:19:35.552 Namespace Attribute Notices: Supported 00:19:35.552 Firmware Activation Notices: Not Supported 00:19:35.552 ANA Change Notices: Not Supported 00:19:35.552 PLE Aggregate Log Change Notices: Not Supported 00:19:35.552 LBA Status Info Alert Notices: Not Supported 00:19:35.552 EGE Aggregate Log Change Notices: Not Supported 00:19:35.552 Normal NVM Subsystem Shutdown event: Not Supported 00:19:35.552 Zone Descriptor Change Notices: Not Supported 00:19:35.552 Discovery Log Change Notices: Not Supported 00:19:35.552 Controller Attributes 00:19:35.552 128-bit Host Identifier: Supported 00:19:35.552 Non-Operational Permissive Mode: Not Supported 00:19:35.552 NVM Sets: Not Supported 00:19:35.552 Read Recovery Levels: Not Supported 00:19:35.552 Endurance Groups: Not Supported 00:19:35.552 Predictable Latency Mode: Not Supported 00:19:35.552 Traffic Based Keep ALive: Not Supported 00:19:35.552 Namespace Granularity: Not Supported 00:19:35.552 SQ Associations: Not Supported 00:19:35.552 UUID List: Not Supported 00:19:35.553 Multi-Domain Subsystem: Not Supported 00:19:35.553 Fixed Capacity Management: Not Supported 00:19:35.553 Variable Capacity Management: Not Supported 00:19:35.553 Delete Endurance Group: Not Supported 00:19:35.553 Delete NVM Set: Not Supported 00:19:35.553 Extended LBA Formats Supported: Not Supported 00:19:35.553 Flexible Data Placement Supported: Not Supported 00:19:35.553 00:19:35.553 Controller Memory Buffer Support 00:19:35.553 ================================ 00:19:35.553 Supported: No 00:19:35.553 00:19:35.553 Persistent Memory Region Support 00:19:35.553 
================================ 00:19:35.553 Supported: No 00:19:35.553 00:19:35.553 Admin Command Set Attributes 00:19:35.553 ============================ 00:19:35.553 Security Send/Receive: Not Supported 00:19:35.553 Format NVM: Not Supported 00:19:35.553 Firmware Activate/Download: Not Supported 00:19:35.553 Namespace Management: Not Supported 00:19:35.553 Device Self-Test: Not Supported 00:19:35.553 Directives: Not Supported 00:19:35.553 NVMe-MI: Not Supported 00:19:35.553 Virtualization Management: Not Supported 00:19:35.553 Doorbell Buffer Config: Not Supported 00:19:35.553 Get LBA Status Capability: Not Supported 00:19:35.553 Command & Feature Lockdown Capability: Not Supported 00:19:35.553 Abort Command Limit: 4 00:19:35.553 Async Event Request Limit: 4 00:19:35.553 Number of Firmware Slots: N/A 00:19:35.553 Firmware Slot 1 Read-Only: N/A 00:19:35.553 Firmware Activation Without Reset: N/A 00:19:35.553 Multiple Update Detection Support: N/A 00:19:35.553 Firmware Update Granularity: No Information Provided 00:19:35.553 Per-Namespace SMART Log: No 00:19:35.553 Asymmetric Namespace Access Log Page: Not Supported 00:19:35.553 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:35.553 Command Effects Log Page: Supported 00:19:35.553 Get Log Page Extended Data: Supported 00:19:35.553 Telemetry Log Pages: Not Supported 00:19:35.553 Persistent Event Log Pages: Not Supported 00:19:35.553 Supported Log Pages Log Page: May Support 00:19:35.554 Commands Supported & Effects Log Page: Not Supported 00:19:35.554 Feature Identifiers & Effects Log Page:May Support 00:19:35.554 NVMe-MI Commands & Effects Log Page: May Support 00:19:35.554 Data Area 4 for Telemetry Log: Not Supported 00:19:35.554 Error Log Page Entries Supported: 128 00:19:35.554 Keep Alive: Supported 00:19:35.554 Keep Alive Granularity: 10000 ms 00:19:35.554 00:19:35.554 NVM Command Set Attributes 00:19:35.554 ========================== 00:19:35.554 Submission Queue Entry Size 00:19:35.554 Max: 64 00:19:35.554 Min: 64 00:19:35.554 Completion Queue Entry Size 00:19:35.554 Max: 16 00:19:35.554 Min: 16 00:19:35.554 Number of Namespaces: 32 00:19:35.554 Compare Command: Supported 00:19:35.554 Write Uncorrectable Command: Not Supported 00:19:35.554 Dataset Management Command: Supported 00:19:35.554 Write Zeroes Command: Supported 00:19:35.554 Set Features Save Field: Not Supported 00:19:35.554 Reservations: Not Supported 00:19:35.554 Timestamp: Not Supported 00:19:35.554 Copy: Supported 00:19:35.554 Volatile Write Cache: Present 00:19:35.554 Atomic Write Unit (Normal): 1 00:19:35.554 Atomic Write Unit (PFail): 1 00:19:35.554 Atomic Compare & Write Unit: 1 00:19:35.554 Fused Compare & Write: Supported 00:19:35.554 Scatter-Gather List 00:19:35.554 SGL Command Set: Supported (Dword aligned) 00:19:35.554 SGL Keyed: Not Supported 00:19:35.554 SGL Bit Bucket Descriptor: Not Supported 00:19:35.554 SGL Metadata Pointer: Not Supported 00:19:35.554 Oversized SGL: Not Supported 00:19:35.554 SGL Metadata Address: Not Supported 00:19:35.554 SGL Offset: Not Supported 00:19:35.554 Transport SGL Data Block: Not Supported 00:19:35.554 Replay Protected Memory Block: Not Supported 00:19:35.554 00:19:35.554 Firmware Slot Information 00:19:35.555 ========================= 00:19:35.555 Active slot: 1 00:19:35.555 Slot 1 Firmware Revision: 25.01 00:19:35.555 00:19:35.555 00:19:35.555 Commands Supported and Effects 00:19:35.555 ============================== 00:19:35.555 Admin Commands 00:19:35.555 -------------- 00:19:35.555 Get Log Page (02h): Supported 
00:19:35.555 Identify (06h): Supported 00:19:35.555 Abort (08h): Supported 00:19:35.555 Set Features (09h): Supported 00:19:35.555 Get Features (0Ah): Supported 00:19:35.555 Asynchronous Event Request (0Ch): Supported 00:19:35.555 Keep Alive (18h): Supported 00:19:35.555 I/O Commands 00:19:35.555 ------------ 00:19:35.555 Flush (00h): Supported LBA-Change 00:19:35.555 Write (01h): Supported LBA-Change 00:19:35.555 Read (02h): Supported 00:19:35.555 Compare (05h): Supported 00:19:35.555 Write Zeroes (08h): Supported LBA-Change 00:19:35.555 Dataset Management (09h): Supported LBA-Change 00:19:35.555 Copy (19h): Supported LBA-Change 00:19:35.555 00:19:35.555 Error Log 00:19:35.555 ========= 00:19:35.555 00:19:35.555 Arbitration 00:19:35.555 =========== 00:19:35.555 Arbitration Burst: 1 00:19:35.555 00:19:35.555 Power Management 00:19:35.555 ================ 00:19:35.555 Number of Power States: 1 00:19:35.555 Current Power State: Power State #0 00:19:35.555 Power State #0: 00:19:35.555 Max Power: 0.00 W 00:19:35.555 Non-Operational State: Operational 00:19:35.555 Entry Latency: Not Reported 00:19:35.555 Exit Latency: Not Reported 00:19:35.555 Relative Read Throughput: 0 00:19:35.555 Relative Read Latency: 0 00:19:35.555 Relative Write Throughput: 0 00:19:35.555 Relative Write Latency: 0 00:19:35.555 Idle Power: Not Reported 00:19:35.555 Active Power: Not Reported 00:19:35.555 Non-Operational Permissive Mode: Not Supported 00:19:35.555 00:19:35.555 Health Information 00:19:35.555 ================== 00:19:35.555 Critical Warnings: 00:19:35.555 Available Spare Space: OK 00:19:35.555 Temperature: OK 00:19:35.555 Device Reliability: OK 00:19:35.555 Read Only: No 00:19:35.555 Volatile Memory Backup: OK 00:19:35.556 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:35.556 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:35.556 Available Spare: 0% 00:19:35.556 Available Sp[2024-09-27 15:38:15.831041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:35.556 [2024-09-27 15:38:15.831049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:35.556 [2024-09-27 15:38:15.831069] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:35.556 [2024-09-27 15:38:15.831076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.556 [2024-09-27 15:38:15.831081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.556 [2024-09-27 15:38:15.831085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.556 [2024-09-27 15:38:15.831089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.556 [2024-09-27 15:38:15.831368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:35.556 [2024-09-27 15:38:15.831375] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:35.556 [2024-09-27 15:38:15.832368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:19:35.556 [2024-09-27 15:38:15.832406] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:35.556 [2024-09-27 15:38:15.832411] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:35.556 [2024-09-27 15:38:15.833374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:35.556 [2024-09-27 15:38:15.833384] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:35.556 [2024-09-27 15:38:15.833439] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:35.556 [2024-09-27 15:38:15.834396] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:35.556 are Threshold: 0% 00:19:35.556 Life Percentage Used: 0% 00:19:35.556 Data Units Read: 0 00:19:35.556 Data Units Written: 0 00:19:35.556 Host Read Commands: 0 00:19:35.556 Host Write Commands: 0 00:19:35.556 Controller Busy Time: 0 minutes 00:19:35.556 Power Cycles: 0 00:19:35.556 Power On Hours: 0 hours 00:19:35.556 Unsafe Shutdowns: 0 00:19:35.556 Unrecoverable Media Errors: 0 00:19:35.556 Lifetime Error Log Entries: 0 00:19:35.556 Warning Temperature Time: 0 minutes 00:19:35.556 Critical Temperature Time: 0 minutes 00:19:35.556 00:19:35.556 Number of Queues 00:19:35.556 ================ 00:19:35.556 Number of I/O Submission Queues: 127 00:19:35.556 Number of I/O Completion Queues: 127 00:19:35.556 00:19:35.556 Active Namespaces 00:19:35.556 ================= 00:19:35.556 Namespace ID:1 00:19:35.556 Error Recovery Timeout: Unlimited 00:19:35.556 Command Set Identifier: NVM (00h) 00:19:35.556 Deallocate: Supported 00:19:35.556 Deallocated/Unwritten Error: Not Supported 00:19:35.556 Deallocated Read Value: Unknown 00:19:35.556 Deallocate in Write Zeroes: Not Supported 00:19:35.556 Deallocated Guard Field: 0xFFFF 00:19:35.556 Flush: Supported 00:19:35.556 Reservation: Supported 00:19:35.557 Namespace Sharing Capabilities: Multiple Controllers 00:19:35.557 Size (in LBAs): 131072 (0GiB) 00:19:35.557 Capacity (in LBAs): 131072 (0GiB) 00:19:35.557 Utilization (in LBAs): 131072 (0GiB) 00:19:35.557 NGUID: D4406EC45B214E4DA384E207CAB257EB 00:19:35.557 UUID: d4406ec4-5b21-4e4d-a384-e207cab257eb 00:19:35.557 Thin Provisioning: Not Supported 00:19:35.557 Per-NS Atomic Units: Yes 00:19:35.557 Atomic Boundary Size (Normal): 0 00:19:35.557 Atomic Boundary Size (PFail): 0 00:19:35.557 Atomic Boundary Offset: 0 00:19:35.557 Maximum Single Source Range Length: 65535 00:19:35.557 Maximum Copy Length: 65535 00:19:35.557 Maximum Source Range Count: 1 00:19:35.557 NGUID/EUI64 Never Reused: No 00:19:35.557 Namespace Write Protected: No 00:19:35.557 Number of LBA Formats: 1 00:19:35.557 Current LBA Format: LBA Format #00 00:19:35.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:35.557 00:19:35.557 15:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:35.557 [2024-09-27 15:38:16.001545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.860 Initializing NVMe Controllers 00:19:40.860 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:40.860 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:40.860 Initialization complete. Launching workers. 00:19:40.860 ======================================================== 00:19:40.860 Latency(us) 00:19:40.860 Device Information : IOPS MiB/s Average min max 00:19:40.860 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40074.79 156.54 3193.71 841.23 6949.55 00:19:40.860 ======================================================== 00:19:40.860 Total : 40074.79 156.54 3193.71 841.23 6949.55 00:19:40.860 00:19:40.860 [2024-09-27 15:38:21.018356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.860 15:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:40.860 [2024-09-27 15:38:21.202196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:46.156 Initializing NVMe Controllers 00:19:46.156 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:46.156 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:46.156 Initialization complete. Launching workers. 00:19:46.156 ======================================================== 00:19:46.156 Latency(us) 00:19:46.156 Device Information : IOPS MiB/s Average min max 00:19:46.156 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16033.94 62.63 7988.63 5328.73 14889.75 00:19:46.156 ======================================================== 00:19:46.156 Total : 16033.94 62.63 7988.63 5328.73 14889.75 00:19:46.156 00:19:46.156 [2024-09-27 15:38:26.243318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:46.156 15:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:46.156 [2024-09-27 15:38:26.433111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:51.454 [2024-09-27 15:38:31.510097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:51.454 Initializing NVMe Controllers 00:19:51.454 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:51.454 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:51.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:51.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:51.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:51.454 Initialization complete. Launching workers. 
00:19:51.454 Starting thread on core 2 00:19:51.454 Starting thread on core 3 00:19:51.454 Starting thread on core 1 00:19:51.454 15:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:51.454 [2024-09-27 15:38:31.740957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.755 [2024-09-27 15:38:34.802100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.755 Initializing NVMe Controllers 00:19:54.755 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.755 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.755 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:54.755 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:54.755 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:54.755 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:54.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:54.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:54.755 Initialization complete. Launching workers. 00:19:54.755 Starting thread on core 1 with urgent priority queue 00:19:54.755 Starting thread on core 2 with urgent priority queue 00:19:54.755 Starting thread on core 3 with urgent priority queue 00:19:54.755 Starting thread on core 0 with urgent priority queue 00:19:54.755 SPDK bdev Controller (SPDK1 ) core 0: 10432.67 IO/s 9.59 secs/100000 ios 00:19:54.755 SPDK bdev Controller (SPDK1 ) core 1: 12220.00 IO/s 8.18 secs/100000 ios 00:19:54.755 SPDK bdev Controller (SPDK1 ) core 2: 8179.00 IO/s 12.23 secs/100000 ios 00:19:54.755 SPDK bdev Controller (SPDK1 ) core 3: 13461.67 IO/s 7.43 secs/100000 ios 00:19:54.755 ======================================================== 00:19:54.755 00:19:54.755 15:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:54.755 [2024-09-27 15:38:35.032316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.755 Initializing NVMe Controllers 00:19:54.755 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.755 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.755 Namespace ID: 1 size: 0GB 00:19:54.755 Initialization complete. 00:19:54.755 INFO: using host memory buffer for IO 00:19:54.755 Hello world! 
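Note: every example binary in this run reaches the controller through the same transport-ID string naming the VFIOUSER socket directory. Collected in one place, with flags exactly as invoked above and $SPDK as in the earlier sketch:

  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

  "$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
  "$SPDK/build/examples/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  "$SPDK/build/examples/arbitration" -t 3 -r "$TRID" -d 256 -g
  "$SPDK/build/examples/hello_world" -d 256 -g -r "$TRID"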
00:19:54.755 [2024-09-27 15:38:35.065508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.755 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:55.015 [2024-09-27 15:38:35.286167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:55.960 Initializing NVMe Controllers 00:19:55.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:55.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:55.960 Initialization complete. Launching workers. 00:19:55.960 submit (in ns) avg, min, max = 5938.1, 2852.5, 3999209.2 00:19:55.960 complete (in ns) avg, min, max = 17819.9, 1630.0, 3997686.7 00:19:55.960 00:19:55.960 Submit histogram 00:19:55.960 ================ 00:19:55.960 Range in us Cumulative Count 00:19:55.960 2.840 - 2.853: 0.0049% ( 1) 00:19:55.960 2.853 - 2.867: 0.0243% ( 4) 00:19:55.960 2.867 - 2.880: 0.1312% ( 22) 00:19:55.960 2.880 - 2.893: 0.7582% ( 129) 00:19:55.960 2.893 - 2.907: 2.0072% ( 257) 00:19:55.960 2.907 - 2.920: 4.1796% ( 447) 00:19:55.960 2.920 - 2.933: 7.7760% ( 740) 00:19:55.960 2.933 - 2.947: 13.1221% ( 1100) 00:19:55.960 2.947 - 2.960: 19.7220% ( 1358) 00:19:55.960 2.960 - 2.973: 26.9586% ( 1489) 00:19:55.961 2.973 - 2.987: 33.8501% ( 1418) 00:19:55.961 2.987 - 3.000: 39.6433% ( 1192) 00:19:55.961 3.000 - 3.013: 44.8824% ( 1078) 00:19:55.961 3.013 - 3.027: 51.4288% ( 1347) 00:19:55.961 3.027 - 3.040: 59.6569% ( 1693) 00:19:55.961 3.040 - 3.053: 70.1254% ( 2154) 00:19:55.961 3.053 - 3.067: 79.5295% ( 1935) 00:19:55.961 3.067 - 3.080: 86.9217% ( 1521) 00:19:55.961 3.080 - 3.093: 91.7477% ( 993) 00:19:55.961 3.093 - 3.107: 95.2712% ( 725) 00:19:55.961 3.107 - 3.120: 97.0791% ( 372) 00:19:55.961 3.120 - 3.133: 98.0657% ( 203) 00:19:55.961 3.133 - 3.147: 98.7850% ( 148) 00:19:55.961 3.147 - 3.160: 99.2856% ( 103) 00:19:55.961 3.160 - 3.173: 99.4703% ( 38) 00:19:55.961 3.173 - 3.187: 99.4994% ( 6) 00:19:55.961 3.187 - 3.200: 99.5189% ( 4) 00:19:55.961 3.200 - 3.213: 99.5237% ( 1) 00:19:55.961 3.227 - 3.240: 99.5286% ( 1) 00:19:55.961 3.253 - 3.267: 99.5383% ( 2) 00:19:55.961 3.293 - 3.307: 99.5432% ( 1) 00:19:55.961 3.320 - 3.333: 99.5480% ( 1) 00:19:55.961 3.373 - 3.387: 99.5529% ( 1) 00:19:55.961 3.520 - 3.547: 99.5577% ( 1) 00:19:55.961 3.547 - 3.573: 99.5675% ( 2) 00:19:55.961 3.573 - 3.600: 99.5723% ( 1) 00:19:55.961 3.707 - 3.733: 99.5772% ( 1) 00:19:55.961 3.760 - 3.787: 99.5820% ( 1) 00:19:55.961 3.867 - 3.893: 99.5869% ( 1) 00:19:55.961 4.000 - 4.027: 99.5918% ( 1) 00:19:55.961 4.400 - 4.427: 99.6015% ( 2) 00:19:55.961 4.587 - 4.613: 99.6063% ( 1) 00:19:55.961 4.613 - 4.640: 99.6112% ( 1) 00:19:55.961 4.640 - 4.667: 99.6258% ( 3) 00:19:55.961 4.667 - 4.693: 99.6355% ( 2) 00:19:55.961 4.827 - 4.853: 99.6404% ( 1) 00:19:55.961 4.853 - 4.880: 99.6501% ( 2) 00:19:55.961 4.933 - 4.960: 99.6549% ( 1) 00:19:55.961 4.960 - 4.987: 99.6598% ( 1) 00:19:55.961 4.987 - 5.013: 99.6647% ( 1) 00:19:55.961 5.013 - 5.040: 99.6695% ( 1) 00:19:55.961 5.040 - 5.067: 99.6744% ( 1) 00:19:55.961 5.093 - 5.120: 99.6841% ( 2) 00:19:55.961 5.147 - 5.173: 99.6938% ( 2) 00:19:55.961 5.173 - 5.200: 99.6987% ( 1) 00:19:55.961 5.200 - 5.227: 99.7035% ( 1) 00:19:55.961 5.307 - 5.333: 99.7084% ( 1) 00:19:55.961 5.333 - 5.360: 
99.7133% ( 1) 00:19:55.961 5.360 - 5.387: 99.7181% ( 1) 00:19:55.961 5.413 - 5.440: 99.7230% ( 1) 00:19:55.961 5.440 - 5.467: 99.7278% ( 1) 00:19:55.961 5.547 - 5.573: 99.7376% ( 2) 00:19:55.961 5.573 - 5.600: 99.7424% ( 1) 00:19:55.961 5.627 - 5.653: 99.7473% ( 1) 00:19:55.961 5.760 - 5.787: 99.7521% ( 1) 00:19:55.961 6.000 - 6.027: 99.7570% ( 1) 00:19:55.961 6.027 - 6.053: 99.7619% ( 1) 00:19:55.961 6.053 - 6.080: 99.7667% ( 1) 00:19:55.961 6.107 - 6.133: 99.7716% ( 1) 00:19:55.961 6.160 - 6.187: 99.7862% ( 3) 00:19:55.961 6.187 - 6.213: 99.8056% ( 4) 00:19:55.961 6.213 - 6.240: 99.8153% ( 2) 00:19:55.961 6.267 - 6.293: 99.8202% ( 1) 00:19:55.961 6.347 - 6.373: 99.8250% ( 1) 00:19:55.961 6.373 - 6.400: 99.8348% ( 2) 00:19:55.961 6.400 - 6.427: 99.8396% ( 1) 00:19:55.961 6.427 - 6.453: 99.8445% ( 1) 00:19:55.961 6.453 - 6.480: 99.8493% ( 1) 00:19:55.961 6.480 - 6.507: 99.8542% ( 1) 00:19:55.961 [2024-09-27 15:38:36.307629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:55.961 6.507 - 6.533: 99.8591% ( 1) 00:19:55.961 6.533 - 6.560: 99.8639% ( 1) 00:19:55.961 6.560 - 6.587: 99.8688% ( 1) 00:19:55.961 6.613 - 6.640: 99.8736% ( 1) 00:19:55.961 6.640 - 6.667: 99.8785% ( 1) 00:19:55.961 6.693 - 6.720: 99.8834% ( 1) 00:19:55.961 6.720 - 6.747: 99.8882% ( 1) 00:19:55.961 6.747 - 6.773: 99.8979% ( 2) 00:19:55.961 6.933 - 6.987: 99.9077% ( 2) 00:19:55.961 6.987 - 7.040: 99.9125% ( 1) 00:19:55.961 7.253 - 7.307: 99.9174% ( 1) 00:19:55.961 7.413 - 7.467: 99.9222% ( 1) 00:19:55.961 12.853 - 12.907: 99.9271% ( 1) 00:19:55.961 3986.773 - 4014.080: 100.0000% ( 15) 00:19:55.961 00:19:55.961 Complete histogram 00:19:55.961 ================== 00:19:55.961 Range in us Cumulative Count 00:19:55.961 1.627 - 1.633: 0.0049% ( 1) 00:19:55.961 1.633 - 1.640: 0.0292% ( 5) 00:19:55.961 1.640 - 1.647: 0.0923% ( 13) 00:19:55.961 1.647 - 1.653: 0.1798% ( 18) 00:19:55.961 1.653 - 1.660: 1.1178% ( 193) 00:19:55.961 1.660 - 1.667: 1.3171% ( 41) 00:19:55.961 1.667 - 1.673: 1.3462% ( 6) 00:19:55.961 1.673 - 1.680: 1.4288% ( 17) 00:19:55.961 1.680 - 1.687: 1.4872% ( 12) 00:19:55.961 1.687 - 1.693: 16.0138% ( 2989) 00:19:55.961 1.693 - 1.700: 34.6520% ( 3835) 00:19:55.961 1.700 - 1.707: 43.7306% ( 1868) 00:19:55.961 1.707 - 1.720: 70.4948% ( 5507) 00:19:55.961 1.720 - 1.733: 79.6656% ( 1887) 00:19:55.961 1.733 - 1.747: 83.4224% ( 773) 00:19:55.961 1.747 - 1.760: 86.4211% ( 617) 00:19:55.961 1.760 - 1.773: 91.3005% ( 1004) 00:19:55.961 1.773 - 1.787: 95.6503% ( 895) 00:19:55.961 1.787 - 1.800: 98.1143% ( 507) 00:19:55.961 1.800 - 1.813: 99.0377% ( 190) 00:19:55.961 1.813 - 1.827: 99.3439% ( 63) 00:19:55.961 1.827 - 1.840: 99.4022% ( 12) 00:19:55.961 1.840 - 1.853: 99.4119% ( 2) 00:19:55.961 1.853 - 1.867: 99.4168% ( 1) 00:19:55.961 1.880 - 1.893: 99.4217% ( 1) 00:19:55.961 1.933 - 1.947: 99.4265% ( 1) 00:19:55.961 1.960 - 1.973: 99.4314% ( 1) 00:19:55.961 1.987 - 2.000: 99.4362% ( 1) 00:19:55.961 2.013 - 2.027: 99.4411% ( 1) 00:19:55.961 2.053 - 2.067: 99.4460% ( 1) 00:19:55.961 2.067 - 2.080: 99.4508% ( 1) 00:19:55.961 2.080 - 2.093: 99.4557% ( 1) 00:19:55.961 2.213 - 2.227: 99.4605% ( 1) 00:19:55.961 2.280 - 2.293: 99.4654% ( 1) 00:19:55.961 3.240 - 3.253: 99.4703% ( 1) 00:19:55.961 3.267 - 3.280: 99.4751% ( 1) 00:19:55.961 3.413 - 3.440: 99.4800% ( 1) 00:19:55.961 3.440 - 3.467: 99.4848% ( 1) 00:19:55.961 3.760 - 3.787: 99.4897% ( 1) 00:19:55.961 3.867 - 3.893: 99.4946% ( 1) 00:19:55.961 3.920 - 3.947: 99.5043% ( 2) 00:19:55.961 4.027 - 4.053: 99.5091% ( 1) 
00:19:55.961 4.133 - 4.160: 99.5140% ( 1) 00:19:55.961 4.373 - 4.400: 99.5189% ( 1) 00:19:55.961 4.400 - 4.427: 99.5237% ( 1) 00:19:55.961 4.453 - 4.480: 99.5286% ( 1) 00:19:55.961 4.533 - 4.560: 99.5383% ( 2) 00:19:55.961 4.667 - 4.693: 99.5432% ( 1) 00:19:55.961 4.773 - 4.800: 99.5480% ( 1) 00:19:55.961 4.933 - 4.960: 99.5529% ( 1) 00:19:55.961 5.280 - 5.307: 99.5577% ( 1) 00:19:55.961 5.307 - 5.333: 99.5626% ( 1) 00:19:55.961 5.467 - 5.493: 99.5675% ( 1) 00:19:55.961 5.840 - 5.867: 99.5723% ( 1) 00:19:55.961 6.640 - 6.667: 99.5772% ( 1) 00:19:55.961 8.800 - 8.853: 99.5820% ( 1) 00:19:55.961 10.240 - 10.293: 99.5869% ( 1) 00:19:55.961 10.827 - 10.880: 99.5918% ( 1) 00:19:55.961 11.147 - 11.200: 99.5966% ( 1) 00:19:55.961 3713.707 - 3741.013: 99.6015% ( 1) 00:19:55.961 3986.773 - 4014.080: 100.0000% ( 82) 00:19:55.961 00:19:55.961 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:55.961 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:55.961 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:55.961 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:55.961 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:56.222 [ 00:19:56.223 { 00:19:56.223 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:56.223 "subtype": "Discovery", 00:19:56.223 "listen_addresses": [], 00:19:56.223 "allow_any_host": true, 00:19:56.223 "hosts": [] 00:19:56.223 }, 00:19:56.223 { 00:19:56.223 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:56.223 "subtype": "NVMe", 00:19:56.223 "listen_addresses": [ 00:19:56.223 { 00:19:56.223 "trtype": "VFIOUSER", 00:19:56.223 "adrfam": "IPv4", 00:19:56.223 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:56.223 "trsvcid": "0" 00:19:56.223 } 00:19:56.223 ], 00:19:56.223 "allow_any_host": true, 00:19:56.223 "hosts": [], 00:19:56.223 "serial_number": "SPDK1", 00:19:56.223 "model_number": "SPDK bdev Controller", 00:19:56.223 "max_namespaces": 32, 00:19:56.223 "min_cntlid": 1, 00:19:56.223 "max_cntlid": 65519, 00:19:56.223 "namespaces": [ 00:19:56.223 { 00:19:56.223 "nsid": 1, 00:19:56.223 "bdev_name": "Malloc1", 00:19:56.223 "name": "Malloc1", 00:19:56.223 "nguid": "D4406EC45B214E4DA384E207CAB257EB", 00:19:56.223 "uuid": "d4406ec4-5b21-4e4d-a384-e207cab257eb" 00:19:56.223 } 00:19:56.223 ] 00:19:56.223 }, 00:19:56.223 { 00:19:56.223 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:56.223 "subtype": "NVMe", 00:19:56.223 "listen_addresses": [ 00:19:56.223 { 00:19:56.223 "trtype": "VFIOUSER", 00:19:56.223 "adrfam": "IPv4", 00:19:56.223 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:56.223 "trsvcid": "0" 00:19:56.223 } 00:19:56.223 ], 00:19:56.223 "allow_any_host": true, 00:19:56.223 "hosts": [], 00:19:56.223 "serial_number": "SPDK2", 00:19:56.223 "model_number": "SPDK bdev Controller", 00:19:56.223 "max_namespaces": 32, 00:19:56.223 "min_cntlid": 1, 00:19:56.223 "max_cntlid": 65519, 00:19:56.223 "namespaces": [ 00:19:56.223 { 00:19:56.223 "nsid": 1, 00:19:56.223 "bdev_name": "Malloc2", 00:19:56.223 "name": "Malloc2", 00:19:56.223 "nguid": "B8D7BDE4788B453AA0CB6CEB01F33B36", 00:19:56.223 
"uuid": "b8d7bde4-788b-453a-a0cb-6ceb01f33b36" 00:19:56.223 } 00:19:56.223 ] 00:19:56.223 } 00:19:56.223 ] 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=352585 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:19:56.223 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:56.223 [2024-09-27 15:38:36.663212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:56.484 Malloc3 00:19:56.484 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:56.744 [2024-09-27 15:38:37.105070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:56.744 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:56.744 Asynchronous Event Request test 00:19:56.744 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.744 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.744 Registering asynchronous event callbacks... 
00:19:56.744 Starting namespace attribute notice tests for all controllers... 00:19:56.744 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:56.744 aer_cb - Changed Namespace 00:19:56.744 Cleaning up... 00:19:57.006 [ 00:19:57.006 { 00:19:57.006 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.006 "subtype": "Discovery", 00:19:57.006 "listen_addresses": [], 00:19:57.006 "allow_any_host": true, 00:19:57.006 "hosts": [] 00:19:57.006 }, 00:19:57.006 { 00:19:57.006 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:57.006 "subtype": "NVMe", 00:19:57.006 "listen_addresses": [ 00:19:57.006 { 00:19:57.006 "trtype": "VFIOUSER", 00:19:57.006 "adrfam": "IPv4", 00:19:57.006 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:57.006 "trsvcid": "0" 00:19:57.006 } 00:19:57.006 ], 00:19:57.006 "allow_any_host": true, 00:19:57.006 "hosts": [], 00:19:57.006 "serial_number": "SPDK1", 00:19:57.006 "model_number": "SPDK bdev Controller", 00:19:57.006 "max_namespaces": 32, 00:19:57.006 "min_cntlid": 1, 00:19:57.006 "max_cntlid": 65519, 00:19:57.006 "namespaces": [ 00:19:57.006 { 00:19:57.006 "nsid": 1, 00:19:57.006 "bdev_name": "Malloc1", 00:19:57.006 "name": "Malloc1", 00:19:57.006 "nguid": "D4406EC45B214E4DA384E207CAB257EB", 00:19:57.006 "uuid": "d4406ec4-5b21-4e4d-a384-e207cab257eb" 00:19:57.006 }, 00:19:57.006 { 00:19:57.006 "nsid": 2, 00:19:57.006 "bdev_name": "Malloc3", 00:19:57.006 "name": "Malloc3", 00:19:57.006 "nguid": "CFCA9E5D941A4D56A0B84A5B1C71BD54", 00:19:57.006 "uuid": "cfca9e5d-941a-4d56-a0b8-4a5b1c71bd54" 00:19:57.006 } 00:19:57.006 ] 00:19:57.006 }, 00:19:57.006 { 00:19:57.006 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:57.006 "subtype": "NVMe", 00:19:57.007 "listen_addresses": [ 00:19:57.007 { 00:19:57.007 "trtype": "VFIOUSER", 00:19:57.007 "adrfam": "IPv4", 00:19:57.007 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:57.007 "trsvcid": "0" 00:19:57.007 } 00:19:57.007 ], 00:19:57.007 "allow_any_host": true, 00:19:57.007 "hosts": [], 00:19:57.007 "serial_number": "SPDK2", 00:19:57.007 "model_number": "SPDK bdev Controller", 00:19:57.007 "max_namespaces": 32, 00:19:57.007 "min_cntlid": 1, 00:19:57.007 "max_cntlid": 65519, 00:19:57.007 "namespaces": [ 00:19:57.007 { 00:19:57.007 "nsid": 1, 00:19:57.007 "bdev_name": "Malloc2", 00:19:57.007 "name": "Malloc2", 00:19:57.007 "nguid": "B8D7BDE4788B453AA0CB6CEB01F33B36", 00:19:57.007 "uuid": "b8d7bde4-788b-453a-a0cb-6ceb01f33b36" 00:19:57.007 } 00:19:57.007 ] 00:19:57.007 } 00:19:57.007 ] 00:19:57.007 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 352585 00:19:57.007 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:57.007 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:57.007 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:57.007 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:57.007 [2024-09-27 15:38:37.343171] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:19:57.007 [2024-09-27 15:38:37.343210] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352643 ] 00:19:57.007 [2024-09-27 15:38:37.370916] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:57.007 [2024-09-27 15:38:37.381108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:57.007 [2024-09-27 15:38:37.381125] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6a44952000 00:19:57.007 [2024-09-27 15:38:37.382108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.383114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.384115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.385122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.386128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.387132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.388140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.389150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.007 [2024-09-27 15:38:37.390157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:57.007 [2024-09-27 15:38:37.390165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6a4365c000 00:19:57.007 [2024-09-27 15:38:37.391078] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:57.007 [2024-09-27 15:38:37.403450] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:57.007 [2024-09-27 15:38:37.403472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:57.007 [2024-09-27 15:38:37.405512] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:57.007 [2024-09-27 15:38:37.405546] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:57.007 [2024-09-27 15:38:37.405603] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:57.007 [2024-09-27 
15:38:37.405615] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:57.007 [2024-09-27 15:38:37.405618] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:57.007 [2024-09-27 15:38:37.406899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:57.007 [2024-09-27 15:38:37.406907] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:57.007 [2024-09-27 15:38:37.406912] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:57.007 [2024-09-27 15:38:37.407521] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:57.007 [2024-09-27 15:38:37.407527] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:57.007 [2024-09-27 15:38:37.407532] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.408527] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:57.007 [2024-09-27 15:38:37.408534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.409533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:57.007 [2024-09-27 15:38:37.409540] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:57.007 [2024-09-27 15:38:37.409543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.409548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.409652] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:57.007 [2024-09-27 15:38:37.409655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.409659] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:57.007 [2024-09-27 15:38:37.410542] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:57.007 [2024-09-27 15:38:37.411545] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:57.007 [2024-09-27 15:38:37.412546] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:19:57.007 [2024-09-27 15:38:37.413557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:57.007 [2024-09-27 15:38:37.413586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:57.007 [2024-09-27 15:38:37.414565] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:57.007 [2024-09-27 15:38:37.414574] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:57.007 [2024-09-27 15:38:37.414577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:57.007 [2024-09-27 15:38:37.414592] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:57.007 [2024-09-27 15:38:37.414597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:57.007 [2024-09-27 15:38:37.414606] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.007 [2024-09-27 15:38:37.414610] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.007 [2024-09-27 15:38:37.414613] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.007 [2024-09-27 15:38:37.414622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.007 [2024-09-27 15:38:37.424901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:57.007 [2024-09-27 15:38:37.424910] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:57.007 [2024-09-27 15:38:37.424914] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:57.007 [2024-09-27 15:38:37.424917] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:57.007 [2024-09-27 15:38:37.424920] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:57.007 [2024-09-27 15:38:37.424924] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:57.007 [2024-09-27 15:38:37.424927] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:57.007 [2024-09-27 15:38:37.424930] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:57.007 [2024-09-27 15:38:37.424936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:57.007 [2024-09-27 15:38:37.424944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.432899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.432910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.008 [2024-09-27 15:38:37.432916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.008 [2024-09-27 15:38:37.432922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.008 [2024-09-27 15:38:37.432928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.008 [2024-09-27 15:38:37.432931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.432938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.432947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.440899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.440906] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:57.008 [2024-09-27 15:38:37.440909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.440914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.440920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.440927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.448900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.448946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.448951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.448957] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:57.008 [2024-09-27 15:38:37.448960] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:57.008 [2024-09-27 15:38:37.448962] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:19:57.008 [2024-09-27 15:38:37.448967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.456899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.456911] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:57.008 [2024-09-27 15:38:37.456917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.456923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.456927] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.008 [2024-09-27 15:38:37.456930] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.008 [2024-09-27 15:38:37.456933] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.008 [2024-09-27 15:38:37.456937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.464899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.464910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.464915] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.464920] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.008 [2024-09-27 15:38:37.464925] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.008 [2024-09-27 15:38:37.464928] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.008 [2024-09-27 15:38:37.464932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.472899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.472907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472922] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472930] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472933] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:57.008 [2024-09-27 15:38:37.472936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:57.008 [2024-09-27 15:38:37.472940] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:57.008 [2024-09-27 15:38:37.472953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.480901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.480912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:57.008 [2024-09-27 15:38:37.488900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:57.008 [2024-09-27 15:38:37.488910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:57.270 [2024-09-27 15:38:37.496898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:57.270 [2024-09-27 15:38:37.496908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:57.270 [2024-09-27 15:38:37.504899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:57.270 [2024-09-27 15:38:37.504913] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:57.270 [2024-09-27 15:38:37.504916] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:57.270 [2024-09-27 15:38:37.504919] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:57.270 [2024-09-27 15:38:37.504922] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:57.270 [2024-09-27 15:38:37.504924] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:57.270 [2024-09-27 15:38:37.504929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:57.270 [2024-09-27 15:38:37.504936] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:57.270 [2024-09-27 15:38:37.504939] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:57.270 [2024-09-27 15:38:37.504942] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.270 [2024-09-27 15:38:37.504946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:57.270 [2024-09-27 15:38:37.504951] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:57.270 [2024-09-27 15:38:37.504954] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.270 [2024-09-27 15:38:37.504956] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.270 [2024-09-27 15:38:37.504961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.270 [2024-09-27 15:38:37.504966] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:57.270 [2024-09-27 15:38:37.504969] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:57.270 [2024-09-27 15:38:37.504972] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.270 [2024-09-27 15:38:37.504976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:57.270 [2024-09-27 15:38:37.512900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:57.270 [2024-09-27 15:38:37.512911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:57.270 [2024-09-27 15:38:37.512919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:57.270 [2024-09-27 15:38:37.512924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:57.270 ===================================================== 00:19:57.270 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:57.270 ===================================================== 00:19:57.270 Controller Capabilities/Features 00:19:57.270 ================================ 00:19:57.270 Vendor ID: 4e58 00:19:57.270 Subsystem Vendor ID: 4e58 00:19:57.270 Serial Number: SPDK2 00:19:57.270 Model Number: SPDK bdev Controller 00:19:57.270 Firmware Version: 25.01 00:19:57.270 Recommended Arb Burst: 6 00:19:57.270 IEEE OUI Identifier: 8d 6b 50 00:19:57.270 Multi-path I/O 00:19:57.270 May have multiple subsystem ports: Yes 00:19:57.270 May have multiple controllers: Yes 00:19:57.270 Associated with SR-IOV VF: No 00:19:57.270 Max Data Transfer Size: 131072 00:19:57.270 Max Number of Namespaces: 32 00:19:57.270 Max Number of I/O Queues: 127 00:19:57.270 NVMe Specification Version (VS): 1.3 00:19:57.270 NVMe Specification Version (Identify): 1.3 00:19:57.270 Maximum Queue Entries: 256 00:19:57.270 Contiguous Queues Required: Yes 00:19:57.270 Arbitration Mechanisms Supported 00:19:57.270 Weighted Round Robin: Not Supported 00:19:57.270 Vendor Specific: Not Supported 00:19:57.270 Reset Timeout: 15000 ms 00:19:57.270 Doorbell Stride: 4 bytes 00:19:57.270 NVM Subsystem Reset: Not Supported 00:19:57.270 Command 
Sets Supported 00:19:57.270 NVM Command Set: Supported 00:19:57.270 Boot Partition: Not Supported 00:19:57.270 Memory Page Size Minimum: 4096 bytes 00:19:57.270 Memory Page Size Maximum: 4096 bytes 00:19:57.270 Persistent Memory Region: Not Supported 00:19:57.270 Optional Asynchronous Events Supported 00:19:57.270 Namespace Attribute Notices: Supported 00:19:57.270 Firmware Activation Notices: Not Supported 00:19:57.270 ANA Change Notices: Not Supported 00:19:57.270 PLE Aggregate Log Change Notices: Not Supported 00:19:57.270 LBA Status Info Alert Notices: Not Supported 00:19:57.270 EGE Aggregate Log Change Notices: Not Supported 00:19:57.270 Normal NVM Subsystem Shutdown event: Not Supported 00:19:57.270 Zone Descriptor Change Notices: Not Supported 00:19:57.270 Discovery Log Change Notices: Not Supported 00:19:57.270 Controller Attributes 00:19:57.270 128-bit Host Identifier: Supported 00:19:57.270 Non-Operational Permissive Mode: Not Supported 00:19:57.270 NVM Sets: Not Supported 00:19:57.270 Read Recovery Levels: Not Supported 00:19:57.270 Endurance Groups: Not Supported 00:19:57.270 Predictable Latency Mode: Not Supported 00:19:57.270 Traffic Based Keep ALive: Not Supported 00:19:57.270 Namespace Granularity: Not Supported 00:19:57.270 SQ Associations: Not Supported 00:19:57.270 UUID List: Not Supported 00:19:57.270 Multi-Domain Subsystem: Not Supported 00:19:57.270 Fixed Capacity Management: Not Supported 00:19:57.270 Variable Capacity Management: Not Supported 00:19:57.270 Delete Endurance Group: Not Supported 00:19:57.270 Delete NVM Set: Not Supported 00:19:57.270 Extended LBA Formats Supported: Not Supported 00:19:57.270 Flexible Data Placement Supported: Not Supported 00:19:57.270 00:19:57.271 Controller Memory Buffer Support 00:19:57.271 ================================ 00:19:57.271 Supported: No 00:19:57.271 00:19:57.271 Persistent Memory Region Support 00:19:57.271 ================================ 00:19:57.271 Supported: No 00:19:57.271 00:19:57.271 Admin Command Set Attributes 00:19:57.271 ============================ 00:19:57.271 Security Send/Receive: Not Supported 00:19:57.271 Format NVM: Not Supported 00:19:57.271 Firmware Activate/Download: Not Supported 00:19:57.271 Namespace Management: Not Supported 00:19:57.271 Device Self-Test: Not Supported 00:19:57.271 Directives: Not Supported 00:19:57.271 NVMe-MI: Not Supported 00:19:57.271 Virtualization Management: Not Supported 00:19:57.271 Doorbell Buffer Config: Not Supported 00:19:57.271 Get LBA Status Capability: Not Supported 00:19:57.271 Command & Feature Lockdown Capability: Not Supported 00:19:57.271 Abort Command Limit: 4 00:19:57.271 Async Event Request Limit: 4 00:19:57.271 Number of Firmware Slots: N/A 00:19:57.271 Firmware Slot 1 Read-Only: N/A 00:19:57.271 Firmware Activation Without Reset: N/A 00:19:57.271 Multiple Update Detection Support: N/A 00:19:57.271 Firmware Update Granularity: No Information Provided 00:19:57.271 Per-Namespace SMART Log: No 00:19:57.271 Asymmetric Namespace Access Log Page: Not Supported 00:19:57.271 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:57.271 Command Effects Log Page: Supported 00:19:57.271 Get Log Page Extended Data: Supported 00:19:57.271 Telemetry Log Pages: Not Supported 00:19:57.271 Persistent Event Log Pages: Not Supported 00:19:57.271 Supported Log Pages Log Page: May Support 00:19:57.271 Commands Supported & Effects Log Page: Not Supported 00:19:57.271 Feature Identifiers & Effects Log Page:May Support 00:19:57.271 NVMe-MI Commands & Effects Log Page: May Support 
00:19:57.271 Data Area 4 for Telemetry Log: Not Supported 00:19:57.271 Error Log Page Entries Supported: 128 00:19:57.271 Keep Alive: Supported 00:19:57.271 Keep Alive Granularity: 10000 ms 00:19:57.271 00:19:57.271 NVM Command Set Attributes 00:19:57.271 ========================== 00:19:57.271 Submission Queue Entry Size 00:19:57.271 Max: 64 00:19:57.271 Min: 64 00:19:57.271 Completion Queue Entry Size 00:19:57.271 Max: 16 00:19:57.271 Min: 16 00:19:57.271 Number of Namespaces: 32 00:19:57.271 Compare Command: Supported 00:19:57.271 Write Uncorrectable Command: Not Supported 00:19:57.271 Dataset Management Command: Supported 00:19:57.271 Write Zeroes Command: Supported 00:19:57.271 Set Features Save Field: Not Supported 00:19:57.271 Reservations: Not Supported 00:19:57.271 Timestamp: Not Supported 00:19:57.271 Copy: Supported 00:19:57.271 Volatile Write Cache: Present 00:19:57.271 Atomic Write Unit (Normal): 1 00:19:57.271 Atomic Write Unit (PFail): 1 00:19:57.271 Atomic Compare & Write Unit: 1 00:19:57.271 Fused Compare & Write: Supported 00:19:57.271 Scatter-Gather List 00:19:57.271 SGL Command Set: Supported (Dword aligned) 00:19:57.271 SGL Keyed: Not Supported 00:19:57.271 SGL Bit Bucket Descriptor: Not Supported 00:19:57.271 SGL Metadata Pointer: Not Supported 00:19:57.271 Oversized SGL: Not Supported 00:19:57.271 SGL Metadata Address: Not Supported 00:19:57.271 SGL Offset: Not Supported 00:19:57.271 Transport SGL Data Block: Not Supported 00:19:57.271 Replay Protected Memory Block: Not Supported 00:19:57.271 00:19:57.271 Firmware Slot Information 00:19:57.271 ========================= 00:19:57.271 Active slot: 1 00:19:57.271 Slot 1 Firmware Revision: 25.01 00:19:57.271 00:19:57.271 00:19:57.271 Commands Supported and Effects 00:19:57.271 ============================== 00:19:57.271 Admin Commands 00:19:57.271 -------------- 00:19:57.271 Get Log Page (02h): Supported 00:19:57.271 Identify (06h): Supported 00:19:57.271 Abort (08h): Supported 00:19:57.271 Set Features (09h): Supported 00:19:57.271 Get Features (0Ah): Supported 00:19:57.271 Asynchronous Event Request (0Ch): Supported 00:19:57.271 Keep Alive (18h): Supported 00:19:57.271 I/O Commands 00:19:57.271 ------------ 00:19:57.271 Flush (00h): Supported LBA-Change 00:19:57.271 Write (01h): Supported LBA-Change 00:19:57.271 Read (02h): Supported 00:19:57.271 Compare (05h): Supported 00:19:57.271 Write Zeroes (08h): Supported LBA-Change 00:19:57.271 Dataset Management (09h): Supported LBA-Change 00:19:57.271 Copy (19h): Supported LBA-Change 00:19:57.271 00:19:57.271 Error Log 00:19:57.271 ========= 00:19:57.271 00:19:57.271 Arbitration 00:19:57.271 =========== 00:19:57.271 Arbitration Burst: 1 00:19:57.271 00:19:57.271 Power Management 00:19:57.271 ================ 00:19:57.271 Number of Power States: 1 00:19:57.271 Current Power State: Power State #0 00:19:57.271 Power State #0: 00:19:57.271 Max Power: 0.00 W 00:19:57.271 Non-Operational State: Operational 00:19:57.271 Entry Latency: Not Reported 00:19:57.271 Exit Latency: Not Reported 00:19:57.271 Relative Read Throughput: 0 00:19:57.271 Relative Read Latency: 0 00:19:57.271 Relative Write Throughput: 0 00:19:57.271 Relative Write Latency: 0 00:19:57.271 Idle Power: Not Reported 00:19:57.271 Active Power: Not Reported 00:19:57.271 Non-Operational Permissive Mode: Not Supported 00:19:57.271 00:19:57.271 Health Information 00:19:57.271 ================== 00:19:57.271 Critical Warnings: 00:19:57.271 Available Spare Space: OK 00:19:57.271 Temperature: OK 00:19:57.271 Device 
Reliability: OK 00:19:57.271 Read Only: No 00:19:57.271 Volatile Memory Backup: OK 00:19:57.271 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:57.271 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:57.271 Available Spare: 0% 00:19:57.271 [2024-09-27 15:38:37.512994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:57.271 [2024-09-27 15:38:37.520900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:57.271 [2024-09-27 15:38:37.520924] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:57.271 [2024-09-27 15:38:37.520931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.271 [2024-09-27 15:38:37.520935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.271 [2024-09-27 15:38:37.520940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.271 [2024-09-27 15:38:37.520944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.271 [2024-09-27 15:38:37.520989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:57.271 [2024-09-27 15:38:37.520997] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:57.271 [2024-09-27 15:38:37.521995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:57.271 [2024-09-27 15:38:37.522030] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:57.271 [2024-09-27 15:38:37.522037] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:57.271 [2024-09-27 15:38:37.522995] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:57.271 [2024-09-27 15:38:37.523004] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:57.271 [2024-09-27 15:38:37.523052] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:57.271 [2024-09-27 15:38:37.524013] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:57.271 Available Spare Threshold: 0% 00:19:57.271 Life Percentage Used: 0% 00:19:57.271 Data Units Read: 0 00:19:57.271 Data Units Written: 0 00:19:57.271 Host Read Commands: 0 00:19:57.271 Host Write Commands: 0 00:19:57.271 Controller Busy Time: 0 minutes 00:19:57.271 Power Cycles: 0 00:19:57.271 Power On Hours: 0 hours 00:19:57.271 Unsafe Shutdowns: 0 00:19:57.271 Unrecoverable Media Errors: 0 00:19:57.271 Lifetime Error Log Entries: 0 00:19:57.271 Warning Temperature Time: 0 minutes 00:19:57.271 Critical Temperature Time: 0 minutes 00:19:57.271 00:19:57.271 Number of Queues 00:19:57.271 ================ 00:19:57.271 Number of
I/O Submission Queues: 127 00:19:57.271 Number of I/O Completion Queues: 127 00:19:57.271 00:19:57.271 Active Namespaces 00:19:57.271 ================= 00:19:57.271 Namespace ID:1 00:19:57.271 Error Recovery Timeout: Unlimited 00:19:57.271 Command Set Identifier: NVM (00h) 00:19:57.271 Deallocate: Supported 00:19:57.271 Deallocated/Unwritten Error: Not Supported 00:19:57.271 Deallocated Read Value: Unknown 00:19:57.271 Deallocate in Write Zeroes: Not Supported 00:19:57.271 Deallocated Guard Field: 0xFFFF 00:19:57.271 Flush: Supported 00:19:57.271 Reservation: Supported 00:19:57.271 Namespace Sharing Capabilities: Multiple Controllers 00:19:57.272 Size (in LBAs): 131072 (0GiB) 00:19:57.272 Capacity (in LBAs): 131072 (0GiB) 00:19:57.272 Utilization (in LBAs): 131072 (0GiB) 00:19:57.272 NGUID: B8D7BDE4788B453AA0CB6CEB01F33B36 00:19:57.272 UUID: b8d7bde4-788b-453a-a0cb-6ceb01f33b36 00:19:57.272 Thin Provisioning: Not Supported 00:19:57.272 Per-NS Atomic Units: Yes 00:19:57.272 Atomic Boundary Size (Normal): 0 00:19:57.272 Atomic Boundary Size (PFail): 0 00:19:57.272 Atomic Boundary Offset: 0 00:19:57.272 Maximum Single Source Range Length: 65535 00:19:57.272 Maximum Copy Length: 65535 00:19:57.272 Maximum Source Range Count: 1 00:19:57.272 NGUID/EUI64 Never Reused: No 00:19:57.272 Namespace Write Protected: No 00:19:57.272 Number of LBA Formats: 1 00:19:57.272 Current LBA Format: LBA Format #00 00:19:57.272 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:57.272 00:19:57.272 15:38:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:57.272 [2024-09-27 15:38:37.693938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:02.561 Initializing NVMe Controllers 00:20:02.561 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:02.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:02.561 Initialization complete. Launching workers. 
00:20:02.561 ======================================================== 00:20:02.561 Latency(us) 00:20:02.561 Device Information : IOPS MiB/s Average min max 00:20:02.561 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40022.71 156.34 3198.03 836.45 10787.90 00:20:02.561 ======================================================== 00:20:02.561 Total : 40022.71 156.34 3198.03 836.45 10787.90 00:20:02.561 00:20:02.561 [2024-09-27 15:38:42.801115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:02.561 15:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:02.561 [2024-09-27 15:38:42.976622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:07.849 Initializing NVMe Controllers 00:20:07.849 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:07.849 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:07.849 Initialization complete. Launching workers. 00:20:07.849 ======================================================== 00:20:07.849 Latency(us) 00:20:07.849 Device Information : IOPS MiB/s Average min max 00:20:07.849 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39978.80 156.17 3201.66 848.31 10774.80 00:20:07.849 ======================================================== 00:20:07.849 Total : 39978.80 156.17 3201.66 848.31 10774.80 00:20:07.849 00:20:07.849 [2024-09-27 15:38:47.996682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:07.849 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:07.849 [2024-09-27 15:38:48.189862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:13.138 [2024-09-27 15:38:53.312115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:13.138 Initializing NVMe Controllers 00:20:13.138 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:13.138 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:13.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:13.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:13.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:13.138 Initialization complete. Launching workers. 
00:20:13.138 Starting thread on core 2 00:20:13.138 Starting thread on core 3 00:20:13.138 Starting thread on core 1 00:20:13.138 15:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:13.138 [2024-09-27 15:38:53.544297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:16.439 [2024-09-27 15:38:56.593501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:16.439 Initializing NVMe Controllers 00:20:16.439 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.439 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:16.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:16.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:16.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:16.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:16.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:16.439 Initialization complete. Launching workers. 00:20:16.439 Starting thread on core 1 with urgent priority queue 00:20:16.439 Starting thread on core 2 with urgent priority queue 00:20:16.440 Starting thread on core 3 with urgent priority queue 00:20:16.440 Starting thread on core 0 with urgent priority queue 00:20:16.440 SPDK bdev Controller (SPDK2 ) core 0: 15110.33 IO/s 6.62 secs/100000 ios 00:20:16.440 SPDK bdev Controller (SPDK2 ) core 1: 10589.67 IO/s 9.44 secs/100000 ios 00:20:16.440 SPDK bdev Controller (SPDK2 ) core 2: 11235.67 IO/s 8.90 secs/100000 ios 00:20:16.440 SPDK bdev Controller (SPDK2 ) core 3: 16038.00 IO/s 6.24 secs/100000 ios 00:20:16.440 ======================================================== 00:20:16.440 00:20:16.440 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:16.440 [2024-09-27 15:38:56.826332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:16.440 Initializing NVMe Controllers 00:20:16.440 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.440 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.440 Namespace ID: 1 size: 0GB 00:20:16.440 Initialization complete. 00:20:16.440 INFO: using host memory buffer for IO 00:20:16.440 Hello world! 
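[annotation] The arbitration table above prints both IO/s and secs/100000 ios per core; the two columns are reciprocals of each other (scaled by the 100000-I/O batch), so the output can be sanity-checked with one line of arithmetic using only numbers taken from the table:
  # secs/100000 ios = 100000 / IOPS; core 0 reported 15110.33 IO/s
  awk 'BEGIN { printf "%.2f\n", 100000 / 15110.33 }'   # prints 6.62, matching the core 0 row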
00:20:16.440 [2024-09-27 15:38:56.836393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:16.440 15:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:16.701 [2024-09-27 15:38:57.058262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:18.088 Initializing NVMe Controllers 00:20:18.088 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.088 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.088 Initialization complete. Launching workers. 00:20:18.088 submit (in ns) avg, min, max = 5295.2, 2836.7, 3997602.5 00:20:18.088 complete (in ns) avg, min, max = 17304.6, 1632.5, 3998872.5 00:20:18.088 00:20:18.088 Submit histogram 00:20:18.088 ================ 00:20:18.088 Range in us Cumulative Count 00:20:18.088 2.827 - 2.840: 0.0578% ( 12) 00:20:18.088 2.840 - 2.853: 0.5638% ( 105) 00:20:18.088 2.853 - 2.867: 2.5106% ( 404) 00:20:18.088 2.867 - 2.880: 5.3055% ( 580) 00:20:18.088 2.880 - 2.893: 8.4377% ( 650) 00:20:18.088 2.893 - 2.907: 12.2012% ( 781) 00:20:18.088 2.907 - 2.920: 17.3044% ( 1059) 00:20:18.088 2.920 - 2.933: 24.8795% ( 1572) 00:20:18.088 2.933 - 2.947: 32.5077% ( 1583) 00:20:18.088 2.947 - 2.960: 38.9649% ( 1340) 00:20:18.088 2.960 - 2.973: 44.8246% ( 1216) 00:20:18.088 2.973 - 2.987: 50.4867% ( 1175) 00:20:18.088 2.987 - 3.000: 57.3005% ( 1414) 00:20:18.088 3.000 - 3.013: 67.9260% ( 2205) 00:20:18.088 3.013 - 3.027: 77.4190% ( 1970) 00:20:18.088 3.027 - 3.040: 84.9171% ( 1556) 00:20:18.088 3.040 - 3.053: 90.3624% ( 1130) 00:20:18.088 3.053 - 3.067: 94.1885% ( 794) 00:20:18.088 3.067 - 3.080: 96.4148% ( 462) 00:20:18.088 3.080 - 3.093: 98.0532% ( 340) 00:20:18.088 3.093 - 3.107: 98.9061% ( 177) 00:20:18.089 3.107 - 3.120: 99.3736% ( 97) 00:20:18.089 3.120 - 3.133: 99.5229% ( 31) 00:20:18.089 3.133 - 3.147: 99.5567% ( 7) 00:20:18.089 3.147 - 3.160: 99.5759% ( 4) 00:20:18.089 3.160 - 3.173: 99.5808% ( 1) 00:20:18.089 3.213 - 3.227: 99.5856% ( 1) 00:20:18.089 3.240 - 3.253: 99.5904% ( 1) 00:20:18.089 3.280 - 3.293: 99.5952% ( 1) 00:20:18.089 3.307 - 3.320: 99.6000% ( 1) 00:20:18.089 3.493 - 3.520: 99.6097% ( 2) 00:20:18.089 3.520 - 3.547: 99.6145% ( 1) 00:20:18.089 3.573 - 3.600: 99.6241% ( 2) 00:20:18.089 3.600 - 3.627: 99.6290% ( 1) 00:20:18.089 3.627 - 3.653: 99.6338% ( 1) 00:20:18.089 3.733 - 3.760: 99.6482% ( 3) 00:20:18.089 3.787 - 3.813: 99.6530% ( 1) 00:20:18.089 3.813 - 3.840: 99.6579% ( 1) 00:20:18.089 3.920 - 3.947: 99.6627% ( 1) 00:20:18.089 4.267 - 4.293: 99.6675% ( 1) 00:20:18.089 4.373 - 4.400: 99.6723% ( 1) 00:20:18.089 4.453 - 4.480: 99.6771% ( 1) 00:20:18.089 4.507 - 4.533: 99.6820% ( 1) 00:20:18.089 4.533 - 4.560: 99.6868% ( 1) 00:20:18.089 4.560 - 4.587: 99.7012% ( 3) 00:20:18.089 4.613 - 4.640: 99.7061% ( 1) 00:20:18.089 4.640 - 4.667: 99.7109% ( 1) 00:20:18.089 4.667 - 4.693: 99.7157% ( 1) 00:20:18.089 4.693 - 4.720: 99.7253% ( 2) 00:20:18.089 4.720 - 4.747: 99.7301% ( 1) 00:20:18.089 4.800 - 4.827: 99.7350% ( 1) 00:20:18.089 4.827 - 4.853: 99.7446% ( 2) 00:20:18.089 4.880 - 4.907: 99.7494% ( 1) 00:20:18.089 4.933 - 4.960: 99.7542% ( 1) 00:20:18.089 4.987 - 5.013: 99.7591% ( 1) 00:20:18.089 5.013 - 5.040: 99.7639% ( 1) 00:20:18.089 5.040 - 5.067: 99.7687% ( 1) 00:20:18.089 5.067 - 5.093: 
99.7783% ( 2) 00:20:18.089 5.093 - 5.120: 99.7928% ( 3) 00:20:18.089 5.120 - 5.147: 99.7976% ( 1) 00:20:18.089 5.147 - 5.173: 99.8024% ( 1) 00:20:18.089 5.173 - 5.200: 99.8072% ( 1) 00:20:18.089 5.200 - 5.227: 99.8121% ( 1) 00:20:18.089 5.280 - 5.307: 99.8169% ( 1) 00:20:18.089 5.333 - 5.360: 99.8217% ( 1) 00:20:18.089 5.360 - 5.387: 99.8313% ( 2) 00:20:18.089 5.413 - 5.440: 99.8362% ( 1) 00:20:18.089 5.467 - 5.493: 99.8410% ( 1) 00:20:18.089 5.547 - 5.573: 99.8458% ( 1) 00:20:18.089 5.573 - 5.600: 99.8603% ( 3) 00:20:18.089 5.600 - 5.627: 99.8651% ( 1) 00:20:18.089 5.627 - 5.653: 99.8699% ( 1) 00:20:18.089 5.680 - 5.707: 99.8747% ( 1) 00:20:18.089 5.733 - 5.760: 99.8795% ( 1) 00:20:18.089 5.813 - 5.840: 99.8843% ( 1) 00:20:18.089 5.840 - 5.867: 99.8892% ( 1) 00:20:18.089 5.867 - 5.893: 99.8988% ( 2) 00:20:18.089 5.920 - 5.947: 99.9036% ( 1) 00:20:18.089 6.000 - 6.027: 99.9084% ( 1) 00:20:18.089 [2024-09-27 15:38:58.152431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:18.089 6.080 - 6.107: 99.9133% ( 1) 00:20:18.089 6.160 - 6.187: 99.9181% ( 1) 00:20:18.089 6.427 - 6.453: 99.9229% ( 1) 00:20:18.089 6.453 - 6.480: 99.9277% ( 1) 00:20:18.089 6.693 - 6.720: 99.9325% ( 1) 00:20:18.089 7.947 - 8.000: 99.9374% ( 1) 00:20:18.089 10.240 - 10.293: 99.9422% ( 1) 00:20:18.089 3986.773 - 4014.080: 100.0000% ( 12) 00:20:18.089 00:20:18.089 Complete histogram 00:20:18.089 ================== 00:20:18.089 Range in us Cumulative Count 00:20:18.089 1.627 - 1.633: 0.0048% ( 1) 00:20:18.089 1.633 - 1.640: 0.6120% ( 126) 00:20:18.089 1.640 - 1.647: 1.1180% ( 105) 00:20:18.089 1.647 - 1.653: 1.1517% ( 7) 00:20:18.089 1.653 - 1.660: 1.3011% ( 31) 00:20:18.089 1.660 - 1.667: 1.3637% ( 13) 00:20:18.089 1.667 - 1.673: 1.3734% ( 2) 00:20:18.089 1.673 - 1.680: 1.4071% ( 7) 00:20:18.089 1.680 - 1.687: 1.4987% ( 19) 00:20:18.089 1.687 - 1.693: 38.6565% ( 7711) 00:20:18.089 1.693 - 1.700: 49.6771% ( 2287) 00:20:18.089 1.700 - 1.707: 58.6690% ( 1866) 00:20:18.089 1.707 - 1.720: 77.5732% ( 3923) 00:20:18.089 1.720 - 1.733: 83.0811% ( 1143) 00:20:18.089 1.733 - 1.747: 84.2859% ( 250) 00:20:18.089 1.747 - 1.760: 87.6542% ( 699) 00:20:18.089 1.760 - 1.773: 92.9067% ( 1090) 00:20:18.089 1.773 - 1.787: 96.9497% ( 839) 00:20:18.089 1.787 - 1.800: 98.6989% ( 363) 00:20:18.089 1.800 - 1.813: 99.2627% ( 117) 00:20:18.089 1.813 - 1.827: 99.3880% ( 26) 00:20:18.089 1.827 - 1.840: 99.4121% ( 5) 00:20:18.089 1.840 - 1.853: 99.4169% ( 1) 00:20:18.089 1.920 - 1.933: 99.4217% ( 1) 00:20:18.089 1.973 - 1.987: 99.4266% ( 1) 00:20:18.089 1.987 - 2.000: 99.4314% ( 1) 00:20:18.089 2.000 - 2.013: 99.4362% ( 1) 00:20:18.089 2.013 - 2.027: 99.4410% ( 1) 00:20:18.089 2.040 - 2.053: 99.4458% ( 1) 00:20:18.089 2.067 - 2.080: 99.4507% ( 1) 00:20:18.089 2.120 - 2.133: 99.4555% ( 1) 00:20:18.089 2.240 - 2.253: 99.4603% ( 1) 00:20:18.089 3.187 - 3.200: 99.4651% ( 1) 00:20:18.089 3.307 - 3.320: 99.4699% ( 1) 00:20:18.089 3.360 - 3.373: 99.4747% ( 1) 00:20:18.089 3.413 - 3.440: 99.4796% ( 1) 00:20:18.089 3.440 - 3.467: 99.4844% ( 1) 00:20:18.089 3.520 - 3.547: 99.4892% ( 1) 00:20:18.089 3.547 - 3.573: 99.4940% ( 1) 00:20:18.089 3.600 - 3.627: 99.5037% ( 2) 00:20:18.089 3.627 - 3.653: 99.5085% ( 1) 00:20:18.089 3.707 - 3.733: 99.5133% ( 1) 00:20:18.089 3.813 - 3.840: 99.5181% ( 1) 00:20:18.089 3.893 - 3.920: 99.5229% ( 1) 00:20:18.089 3.973 - 4.000: 99.5278% ( 1) 00:20:18.089 4.053 - 4.080: 99.5326% ( 1) 00:20:18.089 4.107 - 4.133: 99.5374% ( 1) 00:20:18.089 4.213 - 4.240: 99.5422% ( 1) 
00:20:18.089 4.293 - 4.320: 99.5470% ( 1) 00:20:18.089 4.427 - 4.453: 99.5519% ( 1) 00:20:18.089 4.507 - 4.533: 99.5567% ( 1) 00:20:18.089 4.587 - 4.613: 99.5615% ( 1) 00:20:18.089 4.747 - 4.773: 99.5663% ( 1) 00:20:18.089 5.093 - 5.120: 99.5711% ( 1) 00:20:18.089 5.253 - 5.280: 99.5759% ( 1) 00:20:18.089 5.680 - 5.707: 99.5808% ( 1) 00:20:18.089 6.053 - 6.080: 99.5904% ( 2) 00:20:18.089 7.787 - 7.840: 99.5952% ( 1) 00:20:18.089 10.613 - 10.667: 99.6000% ( 1) 00:20:18.089 11.467 - 11.520: 99.6049% ( 1) 00:20:18.089 15.893 - 16.000: 99.6097% ( 1) 00:20:18.089 3986.773 - 4014.080: 100.0000% ( 81) 00:20:18.089 00:20:18.089 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:18.089 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:18.089 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:18.089 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:18.089 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:18.089 [ 00:20:18.089 { 00:20:18.089 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.089 "subtype": "Discovery", 00:20:18.090 "listen_addresses": [], 00:20:18.090 "allow_any_host": true, 00:20:18.090 "hosts": [] 00:20:18.090 }, 00:20:18.090 { 00:20:18.090 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:18.090 "subtype": "NVMe", 00:20:18.090 "listen_addresses": [ 00:20:18.090 { 00:20:18.090 "trtype": "VFIOUSER", 00:20:18.090 "adrfam": "IPv4", 00:20:18.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:18.090 "trsvcid": "0" 00:20:18.090 } 00:20:18.090 ], 00:20:18.090 "allow_any_host": true, 00:20:18.090 "hosts": [], 00:20:18.090 "serial_number": "SPDK1", 00:20:18.090 "model_number": "SPDK bdev Controller", 00:20:18.090 "max_namespaces": 32, 00:20:18.090 "min_cntlid": 1, 00:20:18.090 "max_cntlid": 65519, 00:20:18.090 "namespaces": [ 00:20:18.090 { 00:20:18.090 "nsid": 1, 00:20:18.090 "bdev_name": "Malloc1", 00:20:18.090 "name": "Malloc1", 00:20:18.090 "nguid": "D4406EC45B214E4DA384E207CAB257EB", 00:20:18.090 "uuid": "d4406ec4-5b21-4e4d-a384-e207cab257eb" 00:20:18.090 }, 00:20:18.090 { 00:20:18.090 "nsid": 2, 00:20:18.090 "bdev_name": "Malloc3", 00:20:18.090 "name": "Malloc3", 00:20:18.090 "nguid": "CFCA9E5D941A4D56A0B84A5B1C71BD54", 00:20:18.090 "uuid": "cfca9e5d-941a-4d56-a0b8-4a5b1c71bd54" 00:20:18.090 } 00:20:18.090 ] 00:20:18.090 }, 00:20:18.090 { 00:20:18.090 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:18.090 "subtype": "NVMe", 00:20:18.090 "listen_addresses": [ 00:20:18.090 { 00:20:18.090 "trtype": "VFIOUSER", 00:20:18.090 "adrfam": "IPv4", 00:20:18.090 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:18.090 "trsvcid": "0" 00:20:18.090 } 00:20:18.090 ], 00:20:18.090 "allow_any_host": true, 00:20:18.090 "hosts": [], 00:20:18.090 "serial_number": "SPDK2", 00:20:18.090 "model_number": "SPDK bdev Controller", 00:20:18.090 "max_namespaces": 32, 00:20:18.090 "min_cntlid": 1, 00:20:18.090 "max_cntlid": 65519, 00:20:18.090 "namespaces": [ 00:20:18.090 { 00:20:18.090 "nsid": 1, 00:20:18.090 "bdev_name": "Malloc2", 00:20:18.090 "name": "Malloc2", 00:20:18.090 "nguid": 
"B8D7BDE4788B453AA0CB6CEB01F33B36", 00:20:18.090 "uuid": "b8d7bde4-788b-453a-a0cb-6ceb01f33b36" 00:20:18.090 } 00:20:18.090 ] 00:20:18.090 } 00:20:18.090 ] 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=356762 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:20:18.090 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:18.090 [2024-09-27 15:38:58.521301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:18.351 Malloc4 00:20:18.351 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:18.611 [2024-09-27 15:38:58.932123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:18.611 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:18.611 Asynchronous Event Request test 00:20:18.611 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.611 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.611 Registering asynchronous event callbacks... 00:20:18.611 Starting namespace attribute notice tests for all controllers... 00:20:18.611 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:18.611 aer_cb - Changed Namespace 00:20:18.611 Cleaning up... 00:20:18.871 [ 00:20:18.871 { 00:20:18.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.871 "subtype": "Discovery", 00:20:18.871 "listen_addresses": [], 00:20:18.871 "allow_any_host": true, 00:20:18.871 "hosts": [] 00:20:18.871 }, 00:20:18.871 { 00:20:18.871 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:18.871 "subtype": "NVMe", 00:20:18.871 "listen_addresses": [ 00:20:18.871 { 00:20:18.871 "trtype": "VFIOUSER", 00:20:18.871 "adrfam": "IPv4", 00:20:18.871 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:18.871 "trsvcid": "0" 00:20:18.871 } 00:20:18.871 ], 00:20:18.871 "allow_any_host": true, 00:20:18.871 "hosts": [], 00:20:18.871 "serial_number": "SPDK1", 00:20:18.871 "model_number": "SPDK bdev Controller", 00:20:18.871 "max_namespaces": 32, 00:20:18.871 "min_cntlid": 1, 00:20:18.871 "max_cntlid": 65519, 00:20:18.871 "namespaces": [ 00:20:18.871 { 00:20:18.871 "nsid": 1, 00:20:18.871 "bdev_name": "Malloc1", 00:20:18.871 "name": "Malloc1", 00:20:18.871 "nguid": "D4406EC45B214E4DA384E207CAB257EB", 00:20:18.871 "uuid": "d4406ec4-5b21-4e4d-a384-e207cab257eb" 00:20:18.871 }, 00:20:18.871 { 00:20:18.871 "nsid": 2, 00:20:18.871 "bdev_name": "Malloc3", 00:20:18.871 "name": "Malloc3", 00:20:18.871 "nguid": "CFCA9E5D941A4D56A0B84A5B1C71BD54", 00:20:18.871 "uuid": "cfca9e5d-941a-4d56-a0b8-4a5b1c71bd54" 00:20:18.871 } 00:20:18.871 ] 00:20:18.871 }, 00:20:18.871 { 00:20:18.871 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:18.871 "subtype": "NVMe", 00:20:18.871 "listen_addresses": [ 00:20:18.871 { 00:20:18.871 "trtype": "VFIOUSER", 00:20:18.871 "adrfam": "IPv4", 00:20:18.871 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:18.871 "trsvcid": "0" 00:20:18.871 } 00:20:18.871 ], 00:20:18.871 "allow_any_host": true, 00:20:18.871 "hosts": [], 00:20:18.871 "serial_number": "SPDK2", 00:20:18.871 "model_number": "SPDK bdev Controller", 00:20:18.871 "max_namespaces": 32, 00:20:18.871 "min_cntlid": 1, 00:20:18.871 "max_cntlid": 65519, 00:20:18.871 "namespaces": [ 00:20:18.871 
{ 00:20:18.871 "nsid": 1, 00:20:18.871 "bdev_name": "Malloc2", 00:20:18.871 "name": "Malloc2", 00:20:18.871 "nguid": "B8D7BDE4788B453AA0CB6CEB01F33B36", 00:20:18.871 "uuid": "b8d7bde4-788b-453a-a0cb-6ceb01f33b36" 00:20:18.871 }, 00:20:18.871 { 00:20:18.871 "nsid": 2, 00:20:18.871 "bdev_name": "Malloc4", 00:20:18.871 "name": "Malloc4", 00:20:18.871 "nguid": "A55B769A8CAD4E5FB02E51E418B04446", 00:20:18.871 "uuid": "a55b769a-8cad-4e5f-b02e-51e418b04446" 00:20:18.871 } 00:20:18.871 ] 00:20:18.871 } 00:20:18.871 ] 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 356762 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 347883 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 347883 ']' 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 347883 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 347883 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347883' 00:20:18.871 killing process with pid 347883 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 347883 00:20:18.871 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 347883 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=356964 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 356964' 00:20:19.132 Process pid: 356964 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 356964 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 356964 ']' 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.132 15:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 [2024-09-27 15:38:59.425701] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:19.132 [2024-09-27 15:38:59.426628] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:20:19.194 [2024-09-27 15:38:59.426668] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.194 [2024-09-27 15:38:59.505449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.194 [2024-09-27 15:38:59.534087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.194 [2024-09-27 15:38:59.534120] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.194 [2024-09-27 15:38:59.534126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.194 [2024-09-27 15:38:59.534131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.194 [2024-09-27 15:38:59.534135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.194 [2024-09-27 15:38:59.534277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.194 [2024-09-27 15:38:59.534422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.194 [2024-09-27 15:38:59.534584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.194 [2024-09-27 15:38:59.534587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.194 [2024-09-27 15:38:59.590833] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:19.194 [2024-09-27 15:38:59.592010] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:19.194 [2024-09-27 15:38:59.592512] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:19.194 [2024-09-27 15:38:59.592975] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:19.194 [2024-09-27 15:38:59.593017] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:20:19.765 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.765 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:19.765 15:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:21.149 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:21.149 Malloc1 00:20:21.410 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:21.410 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:21.670 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:21.930 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:21.930 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:21.930 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:21.930 Malloc2 00:20:22.190 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:22.190 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:22.450 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 356964 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 356964 ']' 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 356964 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.710 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 356964 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 356964' 00:20:22.710 killing process with pid 356964 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 356964 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 356964 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:22.710 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:22.710 00:20:22.711 real 0m50.608s 00:20:22.711 user 3m13.295s 00:20:22.711 sys 0m3.072s 00:20:22.711 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.711 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:22.711 ************************************ 00:20:22.711 END TEST nvmf_vfio_user 00:20:22.711 ************************************ 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.972 ************************************ 00:20:22.972 START TEST nvmf_vfio_user_nvme_compliance 00:20:22.972 ************************************ 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:22.972 * Looking for test storage... 
00:20:22.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:22.972 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.973 --rc genhtml_branch_coverage=1 00:20:22.973 --rc genhtml_function_coverage=1 00:20:22.973 --rc genhtml_legend=1 00:20:22.973 --rc geninfo_all_blocks=1 00:20:22.973 --rc geninfo_unexecuted_blocks=1 00:20:22.973 00:20:22.973 ' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.973 --rc genhtml_branch_coverage=1 00:20:22.973 --rc genhtml_function_coverage=1 00:20:22.973 --rc genhtml_legend=1 00:20:22.973 --rc geninfo_all_blocks=1 00:20:22.973 --rc geninfo_unexecuted_blocks=1 00:20:22.973 00:20:22.973 ' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.973 --rc genhtml_branch_coverage=1 00:20:22.973 --rc genhtml_function_coverage=1 00:20:22.973 --rc genhtml_legend=1 00:20:22.973 --rc geninfo_all_blocks=1 00:20:22.973 --rc geninfo_unexecuted_blocks=1 00:20:22.973 00:20:22.973 ' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.973 --rc genhtml_branch_coverage=1 00:20:22.973 --rc genhtml_function_coverage=1 00:20:22.973 --rc genhtml_legend=1 00:20:22.973 --rc geninfo_all_blocks=1 00:20:22.973 --rc 
geninfo_unexecuted_blocks=1 00:20:22.973 00:20:22.973 ' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.973 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=357863 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 357863' 00:20:22.974 Process pid: 357863 00:20:22.974 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 357863 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 357863 ']' 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.234 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 [2024-09-27 15:39:03.509018] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:20:23.235 [2024-09-27 15:39:03.509067] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.235 [2024-09-27 15:39:03.579148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:23.235 [2024-09-27 15:39:03.607786] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.235 [2024-09-27 15:39:03.607821] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.235 [2024-09-27 15:39:03.607827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.235 [2024-09-27 15:39:03.607832] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.235 [2024-09-27 15:39:03.607836] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.235 [2024-09-27 15:39:03.607991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.235 [2024-09-27 15:39:03.608241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.235 [2024-09-27 15:39:03.608242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.235 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.235 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:20:23.235 15:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.620 malloc0 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:24.620 15:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.620 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.621 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:24.621 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.621 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.621 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.621 15:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:24.621 00:20:24.621 00:20:24.621 CUnit - A unit testing framework for C - Version 2.1-3 00:20:24.621 http://cunit.sourceforge.net/ 00:20:24.621 00:20:24.621 00:20:24.621 Suite: nvme_compliance 00:20:24.621 Test: admin_identify_ctrlr_verify_dptr ...[2024-09-27 15:39:04.912319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.621 [2024-09-27 15:39:04.913606] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:24.621 [2024-09-27 15:39:04.913617] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:24.621 [2024-09-27 15:39:04.913622] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:24.621 [2024-09-27 15:39:04.915340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.621 passed 00:20:24.621 Test: admin_identify_ctrlr_verify_fused ...[2024-09-27 15:39:04.993825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.621 [2024-09-27 15:39:04.996848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.621 passed 00:20:24.621 Test: admin_identify_ns ...[2024-09-27 15:39:05.073258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.882 [2024-09-27 15:39:05.136901] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:24.882 [2024-09-27 15:39:05.144902] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:24.882 [2024-09-27 15:39:05.165982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:24.882 passed 00:20:24.882 Test: admin_get_features_mandatory_features ...[2024-09-27 15:39:05.240216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.882 [2024-09-27 15:39:05.243231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.882 passed 00:20:24.882 Test: admin_get_features_optional_features ...[2024-09-27 15:39:05.319710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.882 [2024-09-27 15:39:05.324742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.882 passed 00:20:25.143 Test: admin_set_features_number_of_queues ...[2024-09-27 15:39:05.398454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.143 [2024-09-27 15:39:05.502991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.143 passed 00:20:25.143 Test: admin_get_log_page_mandatory_logs ...[2024-09-27 15:39:05.579017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.143 [2024-09-27 15:39:05.582045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.143 passed 00:20:25.403 Test: admin_get_log_page_with_lpo ...[2024-09-27 15:39:05.659276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.403 [2024-09-27 15:39:05.726901] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:25.403 [2024-09-27 15:39:05.739940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.403 passed 00:20:25.403 Test: fabric_property_get ...[2024-09-27 15:39:05.813161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.403 [2024-09-27 15:39:05.814367] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:25.403 [2024-09-27 15:39:05.816183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.403 passed 00:20:25.663 Test: admin_delete_io_sq_use_admin_qid ...[2024-09-27 15:39:05.892632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.663 [2024-09-27 15:39:05.893836] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:25.663 [2024-09-27 15:39:05.895655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.663 passed 00:20:25.663 Test: admin_delete_io_sq_delete_sq_twice ...[2024-09-27 15:39:05.971400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.663 [2024-09-27 15:39:06.055900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.663 [2024-09-27 15:39:06.071901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.663 [2024-09-27 15:39:06.076973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.663 passed 00:20:25.663 Test: admin_delete_io_cq_use_admin_qid ...[2024-09-27 15:39:06.150236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.663 [2024-09-27 15:39:06.151430] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:25.924 [2024-09-27 15:39:06.153254] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.924 passed 00:20:25.924 Test: admin_delete_io_cq_delete_cq_first ...[2024-09-27 15:39:06.229291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.924 [2024-09-27 15:39:06.304901] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:25.924 [2024-09-27 15:39:06.328905] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.924 [2024-09-27 15:39:06.333962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.924 passed 00:20:25.924 Test: admin_create_io_cq_verify_iv_pc ...[2024-09-27 15:39:06.411992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.184 [2024-09-27 15:39:06.413195] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:26.184 [2024-09-27 15:39:06.413214] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:26.184 [2024-09-27 15:39:06.415011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.184 passed 00:20:26.184 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-09-27 15:39:06.487730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.184 [2024-09-27 15:39:06.579899] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:26.184 [2024-09-27 15:39:06.587900] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:26.184 [2024-09-27 15:39:06.595900] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:26.184 [2024-09-27 15:39:06.603899] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:26.184 [2024-09-27 15:39:06.632969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.184 passed 00:20:26.446 Test: admin_create_io_sq_verify_pc ...[2024-09-27 15:39:06.706156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.446 [2024-09-27 15:39:06.722904] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:26.446 [2024-09-27 15:39:06.740318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.446 passed 00:20:26.446 Test: admin_create_io_qp_max_qps ...[2024-09-27 15:39:06.816789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.834 [2024-09-27 15:39:07.925902] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:27.834 [2024-09-27 15:39:08.297841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.095 passed 00:20:28.095 Test: admin_create_io_sq_shared_cq ...[2024-09-27 15:39:08.372251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.095 [2024-09-27 15:39:08.507898] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:28.095 [2024-09-27 15:39:08.544951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.095 passed 00:20:28.095 00:20:28.096 Run Summary: Type Total Ran Passed Failed Inactive 00:20:28.096 suites 1 1 n/a 0 0 00:20:28.096 tests 18 18 18 0 0 00:20:28.096 asserts 360 
360 360 0 n/a 00:20:28.096 00:20:28.096 Elapsed time = 1.489 seconds 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 357863 ']' 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357863' 00:20:28.357 killing process with pid 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 357863 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:28.357 00:20:28.357 real 0m5.561s 00:20:28.357 user 0m15.556s 00:20:28.357 sys 0m0.490s 00:20:28.357 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.358 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:28.358 ************************************ 00:20:28.358 END TEST nvmf_vfio_user_nvme_compliance 00:20:28.358 ************************************ 00:20:28.358 15:39:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:28.358 15:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:28.358 15:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.358 15:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.621 ************************************ 00:20:28.621 START TEST nvmf_vfio_user_fuzz 00:20:28.621 ************************************ 00:20:28.621 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:28.621 * Looking for test storage... 
00:20:28.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.621 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:28.621 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:20:28.621 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:28.621 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:28.621 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.621 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:28.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.622 --rc genhtml_branch_coverage=1 00:20:28.622 --rc genhtml_function_coverage=1 00:20:28.622 --rc genhtml_legend=1 00:20:28.622 --rc geninfo_all_blocks=1 00:20:28.622 --rc geninfo_unexecuted_blocks=1 00:20:28.622 00:20:28.622 ' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:28.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.622 --rc genhtml_branch_coverage=1 00:20:28.622 --rc genhtml_function_coverage=1 00:20:28.622 --rc genhtml_legend=1 00:20:28.622 --rc geninfo_all_blocks=1 00:20:28.622 --rc geninfo_unexecuted_blocks=1 00:20:28.622 00:20:28.622 ' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:28.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.622 --rc genhtml_branch_coverage=1 00:20:28.622 --rc genhtml_function_coverage=1 00:20:28.622 --rc genhtml_legend=1 00:20:28.622 --rc geninfo_all_blocks=1 00:20:28.622 --rc geninfo_unexecuted_blocks=1 00:20:28.622 00:20:28.622 ' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:28.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.622 --rc genhtml_branch_coverage=1 00:20:28.622 --rc genhtml_function_coverage=1 00:20:28.622 --rc genhtml_legend=1 00:20:28.622 --rc geninfo_all_blocks=1 00:20:28.622 --rc geninfo_unexecuted_blocks=1 00:20:28.622 00:20:28.622 ' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:28.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:28.622 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=359326 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 359326' 00:20:28.623 Process pid: 359326 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 359326 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 359326 ']' 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
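The trace above clears any stale /var/run/vfio-user directory, launches nvmf_tgt on core 0, and blocks in waitforlisten until the target's RPC socket answers. A condensed sketch of that launch-and-wait pattern — the rpc_get_methods polling loop is an assumed stand-in for the real waitforlisten helper, and plain kill stands in for the suite's killprocess guard:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # poll until the RPC socket accepts commands (stand-in for waitforlisten)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done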
00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.623 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.569 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.569 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:29.569 15:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:30.513 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:30.513 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.513 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.513 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.513 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.514 malloc0 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.514 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
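Condensed, the target-side setup traced above is five RPCs plus a socket directory; rpc_cmd is the suite's wrapper around rpc.py, so the same sequence written directly against that script (default RPC socket assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER              # vfio-user transport
    mkdir -p /var/run/vfio-user                         # directory the listener binds under
    $rpc bdev_malloc_create 64 512 -b malloc0           # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0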
00:20:30.776 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:02.905 Fuzzing completed. Shutting down the fuzz application 00:21:02.905 00:21:02.905 Dumping successful admin opcodes: 00:21:02.905 8, 9, 10, 24, 00:21:02.905 Dumping successful io opcodes: 00:21:02.905 0, 00:21:02.905 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1178615, total successful commands: 4630, random_seed: 3065276928 00:21:02.905 NS: 0x200003a1ef00 admin qp, Total commands completed: 211370, total successful commands: 1700, random_seed: 1632335872 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 359326 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 359326 ']' 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 359326 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 359326 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 359326' 00:21:02.905 killing process with pid 359326 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 359326 00:21:02.905 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 359326 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:02.906 00:21:02.906 real 0m32.809s 00:21:02.906 user 0m35.445s 00:21:02.906 sys 0m25.490s 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 
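The 30-second fuzz pass above completed 1,178,615 I/O and 211,370 admin commands against the vfio-user endpoint before shutting down. Reproduced standalone, with the flags exactly as the trace passed them:

    # -m 0x2: pin the fuzzer to core 1 (the target holds core 0)
    # -t 30: run time in seconds; -S 123456: fixed seed so a run can be replayed
    # -N and -a as passed by vfio_user_fuzz.sh (see the trace above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'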
************************************ 00:21:02.906 END TEST nvmf_vfio_user_fuzz 00:21:02.906 ************************************ 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 ************************************ 00:21:02.906 START TEST nvmf_auth_target 00:21:02.906 ************************************ 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:02.906 * Looking for test storage... 00:21:02.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:02.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.906 --rc genhtml_branch_coverage=1 00:21:02.906 --rc genhtml_function_coverage=1 00:21:02.906 --rc genhtml_legend=1 00:21:02.906 --rc geninfo_all_blocks=1 00:21:02.906 --rc geninfo_unexecuted_blocks=1 00:21:02.906 00:21:02.906 ' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:02.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.906 --rc genhtml_branch_coverage=1 00:21:02.906 --rc genhtml_function_coverage=1 00:21:02.906 --rc genhtml_legend=1 00:21:02.906 --rc geninfo_all_blocks=1 00:21:02.906 --rc geninfo_unexecuted_blocks=1 00:21:02.906 00:21:02.906 ' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:02.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.906 --rc genhtml_branch_coverage=1 00:21:02.906 --rc genhtml_function_coverage=1 00:21:02.906 --rc genhtml_legend=1 00:21:02.906 --rc geninfo_all_blocks=1 00:21:02.906 --rc geninfo_unexecuted_blocks=1 00:21:02.906 00:21:02.906 ' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:02.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.906 --rc genhtml_branch_coverage=1 00:21:02.906 --rc genhtml_function_coverage=1 00:21:02.906 --rc genhtml_legend=1 00:21:02.906 --rc geninfo_all_blocks=1 00:21:02.906 --rc geninfo_unexecuted_blocks=1 00:21:02.906 00:21:02.906 ' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.906 15:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.906 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.907 15:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:09.500 
15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:09.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:09.500 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:09.500 15:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:09.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:09.501 Found net devices under 0000:31:00.0: cvl_0_0 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:09.501 Found net devices under 0000:31:00.1: cvl_0_1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
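gather_supported_nvmf_pci_devs above matches PCI vendor:device pairs from its e810/x722/mlx tables and resolves each hit to a kernel netdev via sysfs — here both 0x8086:0x159b ports resolve to cvl_0_0 and cvl_0_1. A stripped-down sketch of that lookup for the E810 parts found in this run:

    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        pci_net_devs=("$pci/net/"*)          # kernel netdev names live under <pci>/net/
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]##*/}"
    done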
00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:09.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:21:09.501 00:21:09.501 --- 10.0.0.2 ping statistics --- 00:21:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.501 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:21:09.501 00:21:09.501 --- 10.0.0.1 ping statistics --- 00:21:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.501 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=369619 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 369619 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 369619 ']' 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
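nvmf_tcp_init, traced above, is what turned two ports of one NIC into a real TCP path: cvl_0_0 (10.0.0.2) is moved into a private network namespace that will run the NVMe-oF target, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, an iptables rule opens the NVMe/TCP listener port 4420, and both directions are ping-verified. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is how nvmf_tgt itself is launched a few lines below.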
00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.501 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=369734 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:10.074 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f1e155e139f2aca4c7178baaa20c29bed2ad79f433eea72c 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Ybg 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f1e155e139f2aca4c7178baaa20c29bed2ad79f433eea72c 0 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f1e155e139f2aca4c7178baaa20c29bed2ad79f433eea72c 0 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f1e155e139f2aca4c7178baaa20c29bed2ad79f433eea72c 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
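gen_dhchap_key, whose first run is traced above, draws len/2 random bytes with xxd -p (so len hex characters of secret) and hands them to an inline python step whose body xtrace does not echo. A hedged reconstruction of that wrapping, assuming the standard DHHC-1 encoding for NVMe DH-HMAC-CHAP secrets: base64 of the secret with its little-endian CRC-32 appended, behind a two-hex-digit hash id (00 null, 01 sha256, 02 sha384, 03 sha512). It reproduces secrets of the shape DHHC-1:00:ZjFlMTU1...xLk3pQ==: that appear later in this log:

    # Reconstruction, not the script's verbatim python body.
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars; the ASCII string is the secret
    python3 -c 'import base64, sys, zlib; s = sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode() + ":")' "$key"

The resulting file is then chmod 0600 and its path stored into the keys/ckeys arrays, as the next lines show.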
00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Ybg 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Ybg 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ybg 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bb7b1e21909b156a94274d5f39158f1c6c0baf21a727db8ae8cd3033f38b8f1f 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.AoF 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bb7b1e21909b156a94274d5f39158f1c6c0baf21a727db8ae8cd3033f38b8f1f 3 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bb7b1e21909b156a94274d5f39158f1c6c0baf21a727db8ae8cd3033f38b8f1f 3 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bb7b1e21909b156a94274d5f39158f1c6c0baf21a727db8ae8cd3033f38b8f1f 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.AoF 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.AoF 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AoF 00:21:10.336 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b4e3947b3715be432a133f2a9f053f87 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.CwZ 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b4e3947b3715be432a133f2a9f053f87 1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b4e3947b3715be432a133f2a9f053f87 1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b4e3947b3715be432a133f2a9f053f87 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.CwZ 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.CwZ 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.CwZ 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8ae5c974dd0fef0524b5e3ab76b5977f2df3feb25665e3d1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.qpH 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8ae5c974dd0fef0524b5e3ab76b5977f2df3feb25665e3d1 2 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8ae5c974dd0fef0524b5e3ab76b5977f2df3feb25665e3d1 2 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.337 15:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8ae5c974dd0fef0524b5e3ab76b5977f2df3feb25665e3d1 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.qpH 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.qpH 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qpH 00:21:10.337 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=17ecb484b779b64b622ed2c1a22857f48f8720fd701efc86 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.cku 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 17ecb484b779b64b622ed2c1a22857f48f8720fd701efc86 2 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 17ecb484b779b64b622ed2c1a22857f48f8720fd701efc86 2 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=17ecb484b779b64b622ed2c1a22857f48f8720fd701efc86 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.cku 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.cku 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.cku 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
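The generator builds matched pairs: keys[i] is the host secret for slot i, and ckeys[i], when present, is the controller secret that makes the authentication bidirectional (slot 3 is deliberately left one-way; ckeys[3]= shows up empty further down). connect_authenticate later treats the controller key as optional via bash's :+ expansion; a sketch of that idiom, with subnqn and hostnqn standing in for the NQNs used in this log:

    # Expands to --dhchap-ctrlr-key ckey3 only when ckeys[3] is non-empty,
    # so one-way slots simply omit the flag. subnqn/hostnqn are placeholders.
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"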
00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b011ac11138c54b0991c96ec452de102 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.RWD 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b011ac11138c54b0991c96ec452de102 1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b011ac11138c54b0991c96ec452de102 1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b011ac11138c54b0991c96ec452de102 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.RWD 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.RWD 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RWD 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b36113c166872da277fe9c6a8bb8e8a9734f56002cf5c5f9bebe26c9f1f483b9 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.euC 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key b36113c166872da277fe9c6a8bb8e8a9734f56002cf5c5f9bebe26c9f1f483b9 3 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b36113c166872da277fe9c6a8bb8e8a9734f56002cf5c5f9bebe26c9f1f483b9 3 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b36113c166872da277fe9c6a8bb8e8a9734f56002cf5c5f9bebe26c9f1f483b9 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:21:10.599 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:21:10.599 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.euC 00:21:10.599 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.euC 00:21:10.599 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.euC 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 369619 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 369619 ']' 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.600 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 369734 /var/tmp/host.sock 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 369734 ']' 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:10.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
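Two SPDK processes cooperate from here on: nvmf_tgt (pid 369619, RPC socket /var/tmp/spdk.sock, running inside the namespace with -L nvmf_auth) is the authenticating target, and a second spdk_tgt (pid 369734, RPC socket /var/tmp/host.sock, -L nvme_auth) plays the host. Every key file is registered on both sides with keyring_file_add_key, and the test then walks the digest x dhgroup x key-slot matrix. One iteration of the loop that follows, condensed (rpc.py path shortened; NQNs as in the log):

    host=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    subsys=nqn.2024-03.io.spdk:cnode0
    rpc=scripts/rpc.py   # the log uses the full workspace path
    # host side: pin the initiator to one digest/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target side (default socket /var/tmp/spdk.sock): require DH-HMAC-CHAP for this host
    $rpc nvmf_subsystem_add_host $subsys $host --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attaching the controller performs the authentication handshake
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $host -n $subsys -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm on the target that the new queue pair completed authentication
    $rpc nvmf_subsystem_get_qpairs $subsys | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Once the in-process path passes, the same slot is exercised through the kernel initiator (nvme connect with --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect and nvmf_subsystem_remove_host), which is what the nvme_connect and disconnect lines below do before the loop moves to the next key.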
00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.861 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.122 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ybg 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ybg 00:21:11.123 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ybg 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.AoF ]] 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AoF 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AoF 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AoF 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CwZ 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.384 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.646 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.646 15:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CwZ 00:21:11.646 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CwZ 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.qpH ]] 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qpH 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qpH 00:21:11.646 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qpH 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cku 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cku 00:21:11.908 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cku 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RWD ]] 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RWD 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RWD 00:21:12.169 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RWD 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:12.431 15:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.euC 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.euC 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.euC 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.431 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.693 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.693 
15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.955 00:21:12.955 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.955 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.955 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.217 { 00:21:13.217 "cntlid": 1, 00:21:13.217 "qid": 0, 00:21:13.217 "state": "enabled", 00:21:13.217 "thread": "nvmf_tgt_poll_group_000", 00:21:13.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.217 "listen_address": { 00:21:13.217 "trtype": "TCP", 00:21:13.217 "adrfam": "IPv4", 00:21:13.217 "traddr": "10.0.0.2", 00:21:13.217 "trsvcid": "4420" 00:21:13.217 }, 00:21:13.217 "peer_address": { 00:21:13.217 "trtype": "TCP", 00:21:13.217 "adrfam": "IPv4", 00:21:13.217 "traddr": "10.0.0.1", 00:21:13.217 "trsvcid": "36606" 00:21:13.217 }, 00:21:13.217 "auth": { 00:21:13.217 "state": "completed", 00:21:13.217 "digest": "sha256", 00:21:13.217 "dhgroup": "null" 00:21:13.217 } 00:21:13.217 } 00:21:13.217 ]' 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.217 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.478 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:13.478 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.689 15:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.689 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.689 15:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.689 { 00:21:17.689 "cntlid": 3, 00:21:17.689 "qid": 0, 00:21:17.689 "state": "enabled", 00:21:17.689 "thread": "nvmf_tgt_poll_group_000", 00:21:17.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:17.689 "listen_address": { 00:21:17.689 "trtype": "TCP", 00:21:17.689 "adrfam": "IPv4", 00:21:17.689 "traddr": "10.0.0.2", 00:21:17.689 "trsvcid": "4420" 00:21:17.689 }, 00:21:17.689 "peer_address": { 00:21:17.689 "trtype": "TCP", 00:21:17.689 "adrfam": "IPv4", 00:21:17.689 "traddr": "10.0.0.1", 00:21:17.689 "trsvcid": "36640" 00:21:17.689 }, 00:21:17.689 "auth": { 00:21:17.689 "state": "completed", 00:21:17.689 "digest": "sha256", 00:21:17.689 "dhgroup": "null" 00:21:17.689 } 00:21:17.689 } 00:21:17.689 ]' 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.689 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:17.951 15:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.894 15:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.894 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.895 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.156 00:21:19.156 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.156 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.156 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.417 { 00:21:19.417 "cntlid": 5, 00:21:19.417 "qid": 0, 00:21:19.417 "state": "enabled", 00:21:19.417 "thread": "nvmf_tgt_poll_group_000", 00:21:19.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:19.417 "listen_address": { 00:21:19.417 "trtype": "TCP", 00:21:19.417 "adrfam": "IPv4", 00:21:19.417 "traddr": "10.0.0.2", 00:21:19.417 "trsvcid": "4420" 00:21:19.417 }, 00:21:19.417 "peer_address": { 00:21:19.417 "trtype": "TCP", 00:21:19.417 "adrfam": "IPv4", 00:21:19.417 "traddr": "10.0.0.1", 00:21:19.417 "trsvcid": "56288" 00:21:19.417 }, 00:21:19.417 "auth": { 00:21:19.417 "state": "completed", 00:21:19.417 "digest": "sha256", 00:21:19.417 "dhgroup": "null" 00:21:19.417 } 00:21:19.417 } 00:21:19.417 ]' 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.417 15:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.417 15:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.678 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:19.678 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.620 15:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.880 00:21:20.880 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.880 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.880 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.141 { 00:21:21.141 "cntlid": 7, 00:21:21.141 "qid": 0, 00:21:21.141 "state": "enabled", 00:21:21.141 "thread": "nvmf_tgt_poll_group_000", 00:21:21.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.141 "listen_address": { 00:21:21.141 "trtype": "TCP", 00:21:21.141 "adrfam": "IPv4", 00:21:21.141 "traddr": "10.0.0.2", 00:21:21.141 "trsvcid": "4420" 00:21:21.141 }, 00:21:21.141 "peer_address": { 00:21:21.141 "trtype": "TCP", 00:21:21.141 "adrfam": "IPv4", 00:21:21.141 "traddr": "10.0.0.1", 00:21:21.141 "trsvcid": "56310" 00:21:21.141 }, 00:21:21.141 "auth": { 00:21:21.141 "state": "completed", 00:21:21.141 "digest": "sha256", 00:21:21.141 "dhgroup": "null" 00:21:21.141 } 00:21:21.141 } 00:21:21.141 ]' 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.141 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.142 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.142 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.142 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.142 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.403 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:21.403 15:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.974 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.235 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.236 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.496 00:21:22.496 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.496 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.496 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.757 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.757 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.757 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.757 15:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.757 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.757 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.757 { 00:21:22.757 "cntlid": 9, 00:21:22.757 "qid": 0, 00:21:22.757 "state": "enabled", 00:21:22.757 "thread": "nvmf_tgt_poll_group_000", 00:21:22.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:22.757 "listen_address": { 00:21:22.757 "trtype": "TCP", 00:21:22.757 "adrfam": "IPv4", 00:21:22.757 "traddr": "10.0.0.2", 00:21:22.757 "trsvcid": "4420" 00:21:22.757 }, 00:21:22.757 "peer_address": { 00:21:22.757 "trtype": "TCP", 00:21:22.757 "adrfam": "IPv4", 00:21:22.757 "traddr": "10.0.0.1", 00:21:22.757 "trsvcid": "56344" 00:21:22.757 }, 00:21:22.757 "auth": { 00:21:22.757 "state": "completed", 00:21:22.757 "digest": "sha256", 00:21:22.757 "dhgroup": "ffdhe2048" 00:21:22.757 } 00:21:22.757 } 00:21:22.757 ]' 00:21:22.757 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.758 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.018 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:23.018 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:23.590 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.590 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.590 15:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.590 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.590 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.590 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.590 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.851 15:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.851 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.113 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.113 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.374 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.374 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.374 { 00:21:24.374 "cntlid": 11, 00:21:24.374 "qid": 0, 00:21:24.374 "state": "enabled", 00:21:24.374 "thread": "nvmf_tgt_poll_group_000", 00:21:24.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:24.374 "listen_address": { 00:21:24.375 "trtype": "TCP", 00:21:24.375 "adrfam": "IPv4", 00:21:24.375 "traddr": "10.0.0.2", 00:21:24.375 "trsvcid": "4420" 00:21:24.375 }, 00:21:24.375 "peer_address": { 00:21:24.375 "trtype": "TCP", 00:21:24.375 "adrfam": "IPv4", 00:21:24.375 "traddr": "10.0.0.1", 00:21:24.375 "trsvcid": "56376" 00:21:24.375 }, 00:21:24.375 "auth": { 00:21:24.375 "state": "completed", 00:21:24.375 "digest": "sha256", 00:21:24.375 "dhgroup": "ffdhe2048" 00:21:24.375 } 00:21:24.375 } 00:21:24.375 ]' 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.375 15:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.375 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.635 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:24.635 15:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:25.207 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:25.208 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:25.469 15:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.469 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.469 00:21:25.730 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.730 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.730 15:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.730 { 00:21:25.730 "cntlid": 13, 00:21:25.730 "qid": 0, 00:21:25.730 "state": "enabled", 00:21:25.730 "thread": "nvmf_tgt_poll_group_000", 00:21:25.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:25.730 "listen_address": { 00:21:25.730 "trtype": "TCP", 00:21:25.730 "adrfam": "IPv4", 00:21:25.730 "traddr": "10.0.0.2", 00:21:25.730 "trsvcid": "4420" 00:21:25.730 }, 00:21:25.730 "peer_address": { 00:21:25.730 "trtype": "TCP", 00:21:25.730 "adrfam": "IPv4", 00:21:25.730 "traddr": "10.0.0.1", 00:21:25.730 "trsvcid": "56408" 00:21:25.730 }, 00:21:25.730 "auth": { 00:21:25.730 "state": "completed", 00:21:25.730 "digest": 
"sha256", 00:21:25.730 "dhgroup": "ffdhe2048" 00:21:25.730 } 00:21:25.730 } 00:21:25.730 ]' 00:21:25.730 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.992 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.254 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:26.254 15:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:26.828 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.089 15:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.089 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.351 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.351 { 00:21:27.351 "cntlid": 15, 00:21:27.351 "qid": 0, 00:21:27.351 "state": "enabled", 00:21:27.351 "thread": "nvmf_tgt_poll_group_000", 00:21:27.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:27.351 "listen_address": { 00:21:27.351 "trtype": "TCP", 00:21:27.351 "adrfam": "IPv4", 00:21:27.351 "traddr": "10.0.0.2", 00:21:27.351 "trsvcid": "4420" 00:21:27.351 }, 00:21:27.351 "peer_address": { 00:21:27.351 "trtype": "TCP", 00:21:27.351 "adrfam": "IPv4", 00:21:27.351 "traddr": "10.0.0.1", 00:21:27.351 
"trsvcid": "56424" 00:21:27.351 }, 00:21:27.351 "auth": { 00:21:27.351 "state": "completed", 00:21:27.351 "digest": "sha256", 00:21:27.351 "dhgroup": "ffdhe2048" 00:21:27.351 } 00:21:27.351 } 00:21:27.351 ]' 00:21:27.351 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.612 15:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.873 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:27.873 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:28.445 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.446 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:28.706 15:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.706 15:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.967 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.967 { 00:21:28.967 "cntlid": 17, 00:21:28.967 "qid": 0, 00:21:28.967 "state": "enabled", 00:21:28.967 "thread": "nvmf_tgt_poll_group_000", 00:21:28.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:28.967 "listen_address": { 00:21:28.967 "trtype": "TCP", 00:21:28.967 "adrfam": "IPv4", 
00:21:28.967 "traddr": "10.0.0.2", 00:21:28.967 "trsvcid": "4420" 00:21:28.967 }, 00:21:28.967 "peer_address": { 00:21:28.967 "trtype": "TCP", 00:21:28.967 "adrfam": "IPv4", 00:21:28.967 "traddr": "10.0.0.1", 00:21:28.967 "trsvcid": "38036" 00:21:28.967 }, 00:21:28.967 "auth": { 00:21:28.967 "state": "completed", 00:21:28.967 "digest": "sha256", 00:21:28.967 "dhgroup": "ffdhe3072" 00:21:28.967 } 00:21:28.967 } 00:21:28.967 ]' 00:21:28.967 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.228 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.489 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:29.489 15:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:30.060 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.060 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:30.061 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.322 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.584 00:21:30.584 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.584 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.584 15:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.584 { 
00:21:30.584 "cntlid": 19, 00:21:30.584 "qid": 0, 00:21:30.584 "state": "enabled", 00:21:30.584 "thread": "nvmf_tgt_poll_group_000", 00:21:30.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:30.584 "listen_address": { 00:21:30.584 "trtype": "TCP", 00:21:30.584 "adrfam": "IPv4", 00:21:30.584 "traddr": "10.0.0.2", 00:21:30.584 "trsvcid": "4420" 00:21:30.584 }, 00:21:30.584 "peer_address": { 00:21:30.584 "trtype": "TCP", 00:21:30.584 "adrfam": "IPv4", 00:21:30.584 "traddr": "10.0.0.1", 00:21:30.584 "trsvcid": "38064" 00:21:30.584 }, 00:21:30.584 "auth": { 00:21:30.584 "state": "completed", 00:21:30.584 "digest": "sha256", 00:21:30.584 "dhgroup": "ffdhe3072" 00:21:30.584 } 00:21:30.584 } 00:21:30.584 ]' 00:21:30.584 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.845 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.105 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:31.105 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:31.678 15:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:31.678 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.939 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.939 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.200 15:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.200 { 00:21:32.200 "cntlid": 21, 00:21:32.200 "qid": 0, 00:21:32.200 "state": "enabled", 00:21:32.200 "thread": "nvmf_tgt_poll_group_000", 00:21:32.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:32.200 "listen_address": { 00:21:32.200 "trtype": "TCP", 00:21:32.200 "adrfam": "IPv4", 00:21:32.200 "traddr": "10.0.0.2", 00:21:32.200 "trsvcid": "4420" 00:21:32.200 }, 00:21:32.200 "peer_address": { 00:21:32.200 "trtype": "TCP", 00:21:32.200 "adrfam": "IPv4", 00:21:32.200 "traddr": "10.0.0.1", 00:21:32.200 "trsvcid": "38098" 00:21:32.200 }, 00:21:32.200 "auth": { 00:21:32.200 "state": "completed", 00:21:32.200 "digest": "sha256", 00:21:32.200 "dhgroup": "ffdhe3072" 00:21:32.200 } 00:21:32.200 } 00:21:32.200 ]' 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.200 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.461 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.461 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.461 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.461 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.461 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.721 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:32.721 15:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:33.293 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.554 15:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.815 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.815 15:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.815 { 00:21:33.815 "cntlid": 23, 00:21:33.815 "qid": 0, 00:21:33.815 "state": "enabled", 00:21:33.815 "thread": "nvmf_tgt_poll_group_000", 00:21:33.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.815 "listen_address": { 00:21:33.815 "trtype": "TCP", 00:21:33.815 "adrfam": "IPv4", 00:21:33.815 "traddr": "10.0.0.2", 00:21:33.815 "trsvcid": "4420" 00:21:33.815 }, 00:21:33.815 "peer_address": { 00:21:33.815 "trtype": "TCP", 00:21:33.815 "adrfam": "IPv4", 00:21:33.815 "traddr": "10.0.0.1", 00:21:33.815 "trsvcid": "38112" 00:21:33.815 }, 00:21:33.815 "auth": { 00:21:33.815 "state": "completed", 00:21:33.815 "digest": "sha256", 00:21:33.815 "dhgroup": "ffdhe3072" 00:21:33.815 } 00:21:33.815 } 00:21:33.815 ]' 00:21:33.815 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.077 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.338 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:34.338 15:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.912 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.172 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.434 00:21:35.434 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.434 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.434 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.694 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.694 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.694 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.694 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.694 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.695 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.695 { 00:21:35.695 "cntlid": 25, 00:21:35.695 "qid": 0, 00:21:35.695 "state": "enabled", 00:21:35.695 "thread": "nvmf_tgt_poll_group_000", 00:21:35.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.695 "listen_address": { 00:21:35.695 "trtype": "TCP", 00:21:35.695 "adrfam": "IPv4", 00:21:35.695 "traddr": "10.0.0.2", 00:21:35.695 "trsvcid": "4420" 00:21:35.695 }, 00:21:35.695 "peer_address": { 00:21:35.695 "trtype": "TCP", 00:21:35.695 "adrfam": "IPv4", 00:21:35.695 "traddr": "10.0.0.1", 00:21:35.695 "trsvcid": "38132" 00:21:35.695 }, 00:21:35.695 "auth": { 00:21:35.695 "state": "completed", 00:21:35.695 "digest": "sha256", 00:21:35.695 "dhgroup": "ffdhe4096" 00:21:35.695 } 00:21:35.695 } 00:21:35.695 ]' 00:21:35.695 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.695 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.695 15:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.695 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.695 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.695 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.695 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.695 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.955 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:35.956 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:36.528 15:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.789 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.051 00:21:37.051 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.051 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.051 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.312 { 00:21:37.312 "cntlid": 27, 00:21:37.312 "qid": 0, 00:21:37.312 "state": "enabled", 00:21:37.312 "thread": "nvmf_tgt_poll_group_000", 00:21:37.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.312 "listen_address": { 00:21:37.312 "trtype": "TCP", 00:21:37.312 "adrfam": "IPv4", 00:21:37.312 "traddr": "10.0.0.2", 00:21:37.312 "trsvcid": "4420" 00:21:37.312 }, 00:21:37.312 "peer_address": { 00:21:37.312 "trtype": "TCP", 00:21:37.312 "adrfam": "IPv4", 00:21:37.312 "traddr": "10.0.0.1", 00:21:37.312 "trsvcid": "38166" 00:21:37.312 }, 00:21:37.312 "auth": { 00:21:37.312 "state": "completed", 00:21:37.312 "digest": "sha256", 00:21:37.312 "dhgroup": "ffdhe4096" 00:21:37.312 } 00:21:37.312 } 00:21:37.312 ]' 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.312 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.574 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:37.574 15:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:38.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:38.147 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.408 15:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.670 00:21:38.671 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
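For reference, every connect_authenticate pass in this run follows the same shape: the host-side bdev_nvme options are narrowed to a single digest/DH-group pair, the host NQN is registered on the subsystem with a DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is under test), a controller is attached through the host RPC socket, and the target's qpair listing is checked for auth.state == "completed" before the controller is detached again. A minimal sketch of one pass, assuming the same sockets, addresses, and NQNs used throughout this run (rpc_cmd drives the target-side RPC, while the host application listens at /var/tmp/host.sock):

    DIGEST=sha256 DHGROUP=ffdhe4096 KEY=key0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # Host accepts only this digest/DH-group combination for DH-HMAC-CHAP.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

    # Target allows the host; the ctrlr key makes the authentication bidirectional.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "c$KEY"

    # Attach, confirm the qpair really authenticated, then detach.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "c$KEY"
    [[ $(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state') == completed ]]
    $HOSTRPC bdev_nvme_detach_controller nvme0

The jq probes of .auth.digest and .auth.dhgroup in the qpair dumps serve the same purpose: they guard against a session that silently authenticated with a different digest or DH group than the one under test.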
00:21:38.671 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.671 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.932 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.932 { 00:21:38.932 "cntlid": 29, 00:21:38.932 "qid": 0, 00:21:38.932 "state": "enabled", 00:21:38.932 "thread": "nvmf_tgt_poll_group_000", 00:21:38.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.932 "listen_address": { 00:21:38.932 "trtype": "TCP", 00:21:38.932 "adrfam": "IPv4", 00:21:38.932 "traddr": "10.0.0.2", 00:21:38.932 "trsvcid": "4420" 00:21:38.932 }, 00:21:38.932 "peer_address": { 00:21:38.932 "trtype": "TCP", 00:21:38.932 "adrfam": "IPv4", 00:21:38.932 "traddr": "10.0.0.1", 00:21:38.932 "trsvcid": "53736" 00:21:38.932 }, 00:21:38.932 "auth": { 00:21:38.932 "state": "completed", 00:21:38.932 "digest": "sha256", 00:21:38.932 "dhgroup": "ffdhe4096" 00:21:38.932 } 00:21:38.932 } 00:21:38.932 ]' 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.933 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.194 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:39.194 15:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: 
--dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.767 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.029 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.290 00:21:40.290 15:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.290 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.290 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.551 { 00:21:40.551 "cntlid": 31, 00:21:40.551 "qid": 0, 00:21:40.551 "state": "enabled", 00:21:40.551 "thread": "nvmf_tgt_poll_group_000", 00:21:40.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.551 "listen_address": { 00:21:40.551 "trtype": "TCP", 00:21:40.551 "adrfam": "IPv4", 00:21:40.551 "traddr": "10.0.0.2", 00:21:40.551 "trsvcid": "4420" 00:21:40.551 }, 00:21:40.551 "peer_address": { 00:21:40.551 "trtype": "TCP", 00:21:40.551 "adrfam": "IPv4", 00:21:40.551 "traddr": "10.0.0.1", 00:21:40.551 "trsvcid": "53762" 00:21:40.551 }, 00:21:40.551 "auth": { 00:21:40.551 "state": "completed", 00:21:40.551 "digest": "sha256", 00:21:40.551 "dhgroup": "ffdhe4096" 00:21:40.551 } 00:21:40.551 } 00:21:40.551 ]' 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.551 15:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.551 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.551 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.551 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.813 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:40.813 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:41.386 15:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.649 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.911 00:21:41.911 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.911 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.911 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.173 { 00:21:42.173 "cntlid": 33, 00:21:42.173 "qid": 0, 00:21:42.173 "state": "enabled", 00:21:42.173 "thread": "nvmf_tgt_poll_group_000", 00:21:42.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.173 "listen_address": { 00:21:42.173 "trtype": "TCP", 00:21:42.173 "adrfam": "IPv4", 00:21:42.173 "traddr": "10.0.0.2", 00:21:42.173 "trsvcid": "4420" 00:21:42.173 }, 00:21:42.173 "peer_address": { 00:21:42.173 "trtype": "TCP", 00:21:42.173 "adrfam": "IPv4", 00:21:42.173 "traddr": "10.0.0.1", 00:21:42.173 "trsvcid": "53780" 00:21:42.173 }, 00:21:42.173 "auth": { 00:21:42.173 "state": "completed", 00:21:42.173 "digest": "sha256", 00:21:42.173 "dhgroup": "ffdhe6144" 00:21:42.173 } 00:21:42.173 } 00:21:42.173 ]' 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.173 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.435 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.435 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.435 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.435 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:42.435 15:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.379 15:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.641 00:21:43.641 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.641 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.641 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.902 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.902 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.902 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.902 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.902 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.903 { 00:21:43.903 "cntlid": 35, 00:21:43.903 "qid": 0, 00:21:43.903 "state": "enabled", 00:21:43.903 "thread": "nvmf_tgt_poll_group_000", 00:21:43.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:43.903 "listen_address": { 00:21:43.903 "trtype": "TCP", 00:21:43.903 "adrfam": "IPv4", 00:21:43.903 "traddr": "10.0.0.2", 00:21:43.903 "trsvcid": "4420" 00:21:43.903 }, 00:21:43.903 "peer_address": { 00:21:43.903 "trtype": "TCP", 00:21:43.903 "adrfam": "IPv4", 00:21:43.903 "traddr": "10.0.0.1", 00:21:43.903 "trsvcid": "53802" 00:21:43.903 }, 00:21:43.903 "auth": { 00:21:43.903 "state": "completed", 00:21:43.903 "digest": "sha256", 00:21:43.903 "dhgroup": "ffdhe6144" 00:21:43.903 } 00:21:43.903 } 00:21:43.903 ]' 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.903 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.164 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.164 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.164 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.164 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:44.164 15:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:44.736 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.997 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.573 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.573 { 00:21:45.573 "cntlid": 37, 00:21:45.573 "qid": 0, 00:21:45.573 "state": "enabled", 00:21:45.573 "thread": "nvmf_tgt_poll_group_000", 00:21:45.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:45.573 "listen_address": { 00:21:45.573 "trtype": "TCP", 00:21:45.573 "adrfam": "IPv4", 00:21:45.573 "traddr": "10.0.0.2", 00:21:45.573 "trsvcid": "4420" 00:21:45.573 }, 00:21:45.573 "peer_address": { 00:21:45.573 "trtype": "TCP", 00:21:45.573 "adrfam": "IPv4", 00:21:45.573 "traddr": "10.0.0.1", 00:21:45.573 "trsvcid": "53832" 00:21:45.573 }, 00:21:45.573 "auth": { 00:21:45.573 "state": "completed", 00:21:45.573 "digest": "sha256", 00:21:45.573 "dhgroup": "ffdhe6144" 00:21:45.573 } 00:21:45.573 } 00:21:45.573 ]' 00:21:45.573 15:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.573 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.573 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.573 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.573 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.836 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.836 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:45.836 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.836 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:45.836 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.779 15:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.779 15:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.779 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.040 00:21:47.040 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.040 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.040 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.301 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.302 { 00:21:47.302 "cntlid": 39, 00:21:47.302 "qid": 0, 00:21:47.302 "state": "enabled", 00:21:47.302 "thread": "nvmf_tgt_poll_group_000", 00:21:47.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:47.302 "listen_address": { 00:21:47.302 "trtype": "TCP", 00:21:47.302 "adrfam": "IPv4", 00:21:47.302 "traddr": "10.0.0.2", 00:21:47.302 "trsvcid": "4420" 00:21:47.302 }, 00:21:47.302 "peer_address": { 00:21:47.302 "trtype": "TCP", 00:21:47.302 "adrfam": "IPv4", 00:21:47.302 "traddr": "10.0.0.1", 00:21:47.302 "trsvcid": "53874" 00:21:47.302 }, 00:21:47.302 "auth": { 00:21:47.302 "state": "completed", 00:21:47.302 "digest": "sha256", 00:21:47.302 "dhgroup": "ffdhe6144" 00:21:47.302 } 00:21:47.302 } 00:21:47.302 ]' 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.302 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.563 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:47.563 15:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:48.136 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
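After each RPC-level pass, the same keys are also exercised through the kernel initiator: nvme connect authenticates with --dhchap-secret (and verifies the controller with --dhchap-ctrl-secret when a controller key is configured), after which the session is torn down with nvme disconnect and the host entry is removed from the subsystem. A condensed sketch of that sequence with the secret values elided; as defined by the NVMe DH-HMAC-CHAP secret representation, the DHHC-1:xx: prefix on each secret encodes which HMAC, if any, transformed it (00 untransformed, 01/02/03 for SHA-256/384/512 respectively):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # -i 1 requests a single I/O queue; -l 0 sets ctrl-loss-tmo to zero so a
    # lost or rejected session is not silently retried.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*uuid:}" \
        --dhchap-secret 'DHHC-1:00:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The "disconnected 1 controller(s)" lines in the trace confirm that each kernel-side connection actually came up, which is the only observable success signal on this path before the host entry is removed.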
00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.397 15:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.968 00:21:48.968 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.968 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.968 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.229 { 00:21:49.229 "cntlid": 41, 00:21:49.229 "qid": 0, 00:21:49.229 "state": "enabled", 00:21:49.229 "thread": "nvmf_tgt_poll_group_000", 00:21:49.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.229 "listen_address": { 00:21:49.229 "trtype": "TCP", 00:21:49.229 "adrfam": "IPv4", 00:21:49.229 "traddr": "10.0.0.2", 00:21:49.229 "trsvcid": "4420" 00:21:49.229 }, 00:21:49.229 "peer_address": { 00:21:49.229 "trtype": "TCP", 00:21:49.229 "adrfam": "IPv4", 00:21:49.229 "traddr": "10.0.0.1", 00:21:49.229 "trsvcid": "53454" 00:21:49.229 }, 00:21:49.229 "auth": { 00:21:49.229 "state": "completed", 00:21:49.229 "digest": "sha256", 00:21:49.229 "dhgroup": "ffdhe8192" 00:21:49.229 } 00:21:49.229 } 00:21:49.229 ]' 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.229 15:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.229 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.490 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:49.491 15:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.063 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.324 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.325 15:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.897 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.897 { 00:21:50.897 "cntlid": 43, 00:21:50.897 "qid": 0, 00:21:50.897 "state": "enabled", 00:21:50.897 "thread": "nvmf_tgt_poll_group_000", 00:21:50.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.897 "listen_address": { 00:21:50.897 "trtype": "TCP", 00:21:50.897 "adrfam": "IPv4", 00:21:50.897 "traddr": "10.0.0.2", 00:21:50.897 "trsvcid": "4420" 00:21:50.897 }, 00:21:50.897 "peer_address": { 00:21:50.897 "trtype": "TCP", 00:21:50.897 "adrfam": "IPv4", 00:21:50.897 "traddr": "10.0.0.1", 00:21:50.897 "trsvcid": "53468" 00:21:50.897 }, 00:21:50.897 "auth": { 00:21:50.897 "state": "completed", 00:21:50.897 "digest": "sha256", 00:21:50.897 "dhgroup": "ffdhe8192" 00:21:50.897 } 00:21:50.897 } 00:21:50.897 ]' 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:50.897 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:51.159 15:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.103 15:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.103 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.678 00:21:52.678 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.678 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.678 15:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.678 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.678 { 00:21:52.678 "cntlid": 45, 00:21:52.678 "qid": 0, 00:21:52.678 "state": "enabled", 00:21:52.678 "thread": "nvmf_tgt_poll_group_000", 00:21:52.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.678 "listen_address": { 00:21:52.678 "trtype": "TCP", 00:21:52.678 "adrfam": "IPv4", 00:21:52.678 "traddr": "10.0.0.2", 00:21:52.678 "trsvcid": "4420" 00:21:52.678 }, 00:21:52.678 "peer_address": { 00:21:52.678 "trtype": "TCP", 00:21:52.678 "adrfam": "IPv4", 00:21:52.678 "traddr": "10.0.0.1", 00:21:52.678 "trsvcid": "53496" 00:21:52.678 }, 00:21:52.678 "auth": { 00:21:52.678 "state": "completed", 00:21:52.678 "digest": "sha256", 00:21:52.678 "dhgroup": "ffdhe8192" 00:21:52.678 } 00:21:52.678 } 00:21:52.678 ]' 00:21:52.678 
15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.940 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.201 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:53.201 15:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.773 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.033 15:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.033 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.294 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.556 { 00:21:54.556 "cntlid": 47, 00:21:54.556 "qid": 0, 00:21:54.556 "state": "enabled", 00:21:54.556 "thread": "nvmf_tgt_poll_group_000", 00:21:54.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.556 "listen_address": { 00:21:54.556 "trtype": "TCP", 00:21:54.556 "adrfam": "IPv4", 00:21:54.556 "traddr": "10.0.0.2", 00:21:54.556 "trsvcid": "4420" 00:21:54.556 }, 00:21:54.556 "peer_address": { 00:21:54.556 "trtype": "TCP", 00:21:54.556 "adrfam": "IPv4", 00:21:54.556 "traddr": "10.0.0.1", 00:21:54.556 "trsvcid": "53532" 00:21:54.556 }, 00:21:54.556 "auth": { 00:21:54.556 "state": "completed", 00:21:54.556 
"digest": "sha256", 00:21:54.556 "dhgroup": "ffdhe8192" 00:21:54.556 } 00:21:54.556 } 00:21:54.556 ]' 00:21:54.556 15:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.556 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.556 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:54.817 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:55.764 15:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:55.764 15:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.764 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.025 00:21:56.025 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.025 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.025 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.286 { 00:21:56.286 "cntlid": 49, 00:21:56.286 "qid": 0, 00:21:56.286 "state": "enabled", 00:21:56.286 "thread": "nvmf_tgt_poll_group_000", 00:21:56.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:56.286 "listen_address": { 00:21:56.286 "trtype": "TCP", 00:21:56.286 "adrfam": "IPv4", 
00:21:56.286 "traddr": "10.0.0.2", 00:21:56.286 "trsvcid": "4420" 00:21:56.286 }, 00:21:56.286 "peer_address": { 00:21:56.286 "trtype": "TCP", 00:21:56.286 "adrfam": "IPv4", 00:21:56.286 "traddr": "10.0.0.1", 00:21:56.286 "trsvcid": "53558" 00:21:56.286 }, 00:21:56.286 "auth": { 00:21:56.286 "state": "completed", 00:21:56.286 "digest": "sha384", 00:21:56.286 "dhgroup": "null" 00:21:56.286 } 00:21:56.286 } 00:21:56.286 ]' 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.286 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.547 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:56.547 15:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.118 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:57.119 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.379 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.639 00:21:57.639 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.639 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.640 15:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.901 { 00:21:57.901 "cntlid": 51, 00:21:57.901 "qid": 0, 00:21:57.901 "state": "enabled", 
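Each attach in this log is followed by the same verification block: bdev_nvme_get_controllers must report the controller under the expected name, and the target-side qpair must reflect exactly the digest and DH group this pass configured, with the auth handshake in the completed state. As a self-contained sketch (RPC paths as in the sketch above; this pass uses sha384 with the "null" DH group, i.e. no FFDHE exchange is performed):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTRPC="$RPC -s /var/tmp/host.sock"

# Controller must have come up on the host side under the expected name.
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target-side qpair must report this pass's parameters and a finished handshake.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]   # literal dhgroup name "null"
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]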
00:21:57.901 "thread": "nvmf_tgt_poll_group_000", 00:21:57.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:57.901 "listen_address": { 00:21:57.901 "trtype": "TCP", 00:21:57.901 "adrfam": "IPv4", 00:21:57.901 "traddr": "10.0.0.2", 00:21:57.901 "trsvcid": "4420" 00:21:57.901 }, 00:21:57.901 "peer_address": { 00:21:57.901 "trtype": "TCP", 00:21:57.901 "adrfam": "IPv4", 00:21:57.901 "traddr": "10.0.0.1", 00:21:57.901 "trsvcid": "53590" 00:21:57.901 }, 00:21:57.901 "auth": { 00:21:57.901 "state": "completed", 00:21:57.901 "digest": "sha384", 00:21:57.901 "dhgroup": "null" 00:21:57.901 } 00:21:57.901 } 00:21:57.901 ]' 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.901 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.162 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:58.162 15:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
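Besides the SPDK host stack, every pass also drives the kernel initiator through nvme-cli with the same key material in DHHC-1 form, visible as the nvme_connect calls above. As far as the format goes, the leading DHHC-1:NN: field encodes how the raw secret was transformed (00 meaning untransformed, 01/02/03 for SHA-256/384/512); unidirectional passes carry only --dhchap-secret, while bidirectional ones add --dhchap-ctrl-secret. A sketch with placeholder secrets follows; real ones can be produced with nvme-cli's gen-dhchap-key, assumed available on this host.

HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396   # host UUID, reused as the hostnqn suffix in this log

# Connect the kernel host with DH-HMAC-CHAP; -i 1 limits the I/O queue count
# and -l 0 sets ctrl-loss-tmo, matching the flags in the log. The secrets
# below are placeholders, not the test's real keys.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key here>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key here>:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0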
00:21:58.733 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.994 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.254 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.255 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.516 15:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.516 { 00:21:59.516 "cntlid": 53, 00:21:59.516 "qid": 0, 00:21:59.516 "state": "enabled", 00:21:59.516 "thread": "nvmf_tgt_poll_group_000", 00:21:59.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.516 "listen_address": { 00:21:59.516 "trtype": "TCP", 00:21:59.516 "adrfam": "IPv4", 00:21:59.516 "traddr": "10.0.0.2", 00:21:59.516 "trsvcid": "4420" 00:21:59.516 }, 00:21:59.516 "peer_address": { 00:21:59.516 "trtype": "TCP", 00:21:59.516 "adrfam": "IPv4", 00:21:59.516 "traddr": "10.0.0.1", 00:21:59.516 "trsvcid": "53242" 00:21:59.516 }, 00:21:59.516 "auth": { 00:21:59.516 "state": "completed", 00:21:59.516 "digest": "sha384", 00:21:59.516 "dhgroup": "null" 00:21:59.516 } 00:21:59.516 } 00:21:59.516 ]' 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.516 15:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.777 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:21:59.777 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:00.348 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.609 15:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.871 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.871 { 00:22:00.871 "cntlid": 55, 00:22:00.871 "qid": 0, 00:22:00.871 "state": "enabled", 00:22:00.871 "thread": "nvmf_tgt_poll_group_000", 00:22:00.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:00.871 "listen_address": { 00:22:00.871 "trtype": "TCP", 00:22:00.871 "adrfam": "IPv4", 00:22:00.871 "traddr": "10.0.0.2", 00:22:00.871 "trsvcid": "4420" 00:22:00.871 }, 00:22:00.871 "peer_address": { 00:22:00.871 "trtype": "TCP", 00:22:00.871 "adrfam": "IPv4", 00:22:00.871 "traddr": "10.0.0.1", 00:22:00.871 "trsvcid": "53276" 00:22:00.871 }, 00:22:00.871 "auth": { 00:22:00.871 "state": "completed", 00:22:00.871 "digest": "sha384", 00:22:00.871 "dhgroup": "null" 00:22:00.871 } 00:22:00.871 } 00:22:00.871 ]' 00:22:00.871 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.132 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.393 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:01.393 15:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:01.964 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.964 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.964 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.964 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.965 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.965 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.965 15:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.965 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:01.965 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.227 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.227 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.487 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.487 { 00:22:02.487 "cntlid": 57, 00:22:02.487 "qid": 0, 00:22:02.487 "state": "enabled", 00:22:02.487 "thread": "nvmf_tgt_poll_group_000", 00:22:02.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.487 "listen_address": { 00:22:02.487 "trtype": "TCP", 00:22:02.487 "adrfam": "IPv4", 00:22:02.487 "traddr": "10.0.0.2", 00:22:02.487 "trsvcid": "4420" 00:22:02.487 }, 00:22:02.487 "peer_address": { 00:22:02.487 "trtype": "TCP", 00:22:02.487 "adrfam": "IPv4", 00:22:02.487 "traddr": "10.0.0.1", 00:22:02.487 "trsvcid": "53298" 00:22:02.487 }, 00:22:02.487 "auth": { 00:22:02.487 "state": "completed", 00:22:02.487 "digest": "sha384", 00:22:02.487 "dhgroup": "ffdhe2048" 00:22:02.488 } 00:22:02.488 } 00:22:02.488 ]' 00:22:02.488 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.748 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.748 15:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.748 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:02.748 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.748 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.748 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.748 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.007 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:03.007 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.577 15:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.838 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.838 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.099 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.100 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.100 { 00:22:04.100 "cntlid": 59, 00:22:04.100 "qid": 0, 00:22:04.100 "state": "enabled", 00:22:04.100 "thread": "nvmf_tgt_poll_group_000", 00:22:04.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.100 "listen_address": { 00:22:04.100 "trtype": "TCP", 00:22:04.100 "adrfam": "IPv4", 00:22:04.100 "traddr": "10.0.0.2", 00:22:04.100 "trsvcid": "4420" 00:22:04.100 }, 00:22:04.100 "peer_address": { 00:22:04.100 "trtype": "TCP", 00:22:04.100 "adrfam": "IPv4", 00:22:04.100 "traddr": "10.0.0.1", 00:22:04.100 "trsvcid": "53324" 00:22:04.100 }, 00:22:04.100 "auth": { 00:22:04.100 "state": "completed", 00:22:04.100 "digest": "sha384", 00:22:04.100 "dhgroup": "ffdhe2048" 00:22:04.100 } 00:22:04.100 } 00:22:04.100 ]' 00:22:04.100 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.100 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.100 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.361 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.361 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.361 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.361 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.361 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.622 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:04.622 15:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:05.194 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.455 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.455 00:22:05.716 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.716 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.716 15:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.716 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.716 { 00:22:05.716 "cntlid": 61, 00:22:05.716 "qid": 0, 00:22:05.716 "state": "enabled", 00:22:05.716 "thread": "nvmf_tgt_poll_group_000", 00:22:05.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:05.717 "listen_address": { 00:22:05.717 "trtype": "TCP", 00:22:05.717 "adrfam": "IPv4", 00:22:05.717 "traddr": "10.0.0.2", 00:22:05.717 "trsvcid": "4420" 00:22:05.717 }, 00:22:05.717 "peer_address": { 00:22:05.717 "trtype": "TCP", 00:22:05.717 "adrfam": "IPv4", 00:22:05.717 "traddr": "10.0.0.1", 00:22:05.717 "trsvcid": "53348" 00:22:05.717 }, 00:22:05.717 "auth": { 00:22:05.717 "state": "completed", 00:22:05.717 "digest": "sha384", 00:22:05.717 "dhgroup": "ffdhe2048" 00:22:05.717 } 00:22:05.717 } 00:22:05.717 ]' 00:22:05.717 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.717 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.717 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.978 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:05.978 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.978 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.978 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.979 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.979 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:05.979 15:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.922 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.184 00:22:07.184 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.184 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.184 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.445 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.445 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.445 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.445 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.446 { 00:22:07.446 "cntlid": 63, 00:22:07.446 "qid": 0, 00:22:07.446 "state": "enabled", 00:22:07.446 "thread": "nvmf_tgt_poll_group_000", 00:22:07.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:07.446 "listen_address": { 00:22:07.446 "trtype": "TCP", 00:22:07.446 "adrfam": "IPv4", 00:22:07.446 "traddr": "10.0.0.2", 00:22:07.446 "trsvcid": "4420" 00:22:07.446 }, 00:22:07.446 "peer_address": { 00:22:07.446 "trtype": "TCP", 00:22:07.446 "adrfam": "IPv4", 00:22:07.446 "traddr": "10.0.0.1", 00:22:07.446 "trsvcid": "53386" 00:22:07.446 }, 00:22:07.446 "auth": { 00:22:07.446 "state": "completed", 00:22:07.446 "digest": "sha384", 00:22:07.446 "dhgroup": "ffdhe2048" 00:22:07.446 } 00:22:07.446 } 00:22:07.446 ]' 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.446 15:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.707 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:07.707 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:08.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.280 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.542 15:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.804 
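
The trace repeats one authentication cycle per (dhgroup, keyid) pair. A minimal sketch of the RPC half of that cycle, assuming the workspace layout shown in the trace and eliding the DHHC-1 key material (HOSTNQN stands for the nqn.2014-08.org.nvmexpress:uuid host NQN; key0/ckey0 are key names pre-loaded elsewhere by the suite; scripts/rpc.py stands in for the hostrpc/rpc_cmd wrappers):

    # host side: restrict DH-HMAC-CHAP negotiation to the combination under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: authorize the host with a host key and a controller (bidirectional) key
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller over TCP using the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

Each cycle then tears down in reverse: bdev_nvme_detach_controller, an nvme-cli connect/disconnect against the same secrets, and nvmf_subsystem_remove_host, as the surrounding trace shows.
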
00:22:08.804 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.804 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.804 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.065 { 00:22:09.065 "cntlid": 65, 00:22:09.065 "qid": 0, 00:22:09.065 "state": "enabled", 00:22:09.065 "thread": "nvmf_tgt_poll_group_000", 00:22:09.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.065 "listen_address": { 00:22:09.065 "trtype": "TCP", 00:22:09.065 "adrfam": "IPv4", 00:22:09.065 "traddr": "10.0.0.2", 00:22:09.065 "trsvcid": "4420" 00:22:09.065 }, 00:22:09.065 "peer_address": { 00:22:09.065 "trtype": "TCP", 00:22:09.065 "adrfam": "IPv4", 00:22:09.065 "traddr": "10.0.0.1", 00:22:09.065 "trsvcid": "57674" 00:22:09.065 }, 00:22:09.065 "auth": { 00:22:09.065 "state": "completed", 00:22:09.065 "digest": "sha384", 00:22:09.065 "dhgroup": "ffdhe3072" 00:22:09.065 } 00:22:09.065 } 00:22:09.065 ]' 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.065 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.326 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:09.326 15:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.899 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.160 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.161 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.421 00:22:10.421 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.421 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.421 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.682 { 00:22:10.682 "cntlid": 67, 00:22:10.682 "qid": 0, 00:22:10.682 "state": "enabled", 00:22:10.682 "thread": "nvmf_tgt_poll_group_000", 00:22:10.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:10.682 "listen_address": { 00:22:10.682 "trtype": "TCP", 00:22:10.682 "adrfam": "IPv4", 00:22:10.682 "traddr": "10.0.0.2", 00:22:10.682 "trsvcid": "4420" 00:22:10.682 }, 00:22:10.682 "peer_address": { 00:22:10.682 "trtype": "TCP", 00:22:10.682 "adrfam": "IPv4", 00:22:10.682 "traddr": "10.0.0.1", 00:22:10.682 "trsvcid": "57704" 00:22:10.682 }, 00:22:10.682 "auth": { 00:22:10.682 "state": "completed", 00:22:10.682 "digest": "sha384", 00:22:10.682 "dhgroup": "ffdhe3072" 00:22:10.682 } 00:22:10.682 } 00:22:10.682 ]' 00:22:10.682 15:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.682 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.942 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret 
DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:10.942 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:11.513 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.514 15:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.775 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.037 00:22:12.037 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.037 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.037 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.298 { 00:22:12.298 "cntlid": 69, 00:22:12.298 "qid": 0, 00:22:12.298 "state": "enabled", 00:22:12.298 "thread": "nvmf_tgt_poll_group_000", 00:22:12.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:12.298 "listen_address": { 00:22:12.298 "trtype": "TCP", 00:22:12.298 "adrfam": "IPv4", 00:22:12.298 "traddr": "10.0.0.2", 00:22:12.298 "trsvcid": "4420" 00:22:12.298 }, 00:22:12.298 "peer_address": { 00:22:12.298 "trtype": "TCP", 00:22:12.298 "adrfam": "IPv4", 00:22:12.298 "traddr": "10.0.0.1", 00:22:12.298 "trsvcid": "57718" 00:22:12.298 }, 00:22:12.298 "auth": { 00:22:12.298 "state": "completed", 00:22:12.298 "digest": "sha384", 00:22:12.298 "dhgroup": "ffdhe3072" 00:22:12.298 } 00:22:12.298 } 00:22:12.298 ]' 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.298 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:12.558 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:12.558 15:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:13.174 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
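
Note that the key3 iterations carry no --dhchap-ctrlr-key: the ckey expansion at target/auth.sh@68 emits the controller-key flag only when a paired ckey exists for that key id. A small self-contained illustration of that bash expansion (the array contents here are hypothetical; the real test populates its keys elsewhere):

    #!/usr/bin/env bash
    # ${var:+word} expands to word only when var is set and non-empty,
    # so an empty ckeys[3] drops the flag entirely
    ckeys=("c0" "c1" "c2" "")
    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[*]:-<unidirectional, no controller key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <unidirectional, no controller key>

This is why the add_host and attach_controller calls for key3 in this trace authenticate in one direction only.
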
00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.434 15:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.694 00:22:13.694 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.694 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.694 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.954 { 00:22:13.954 "cntlid": 71, 00:22:13.954 "qid": 0, 00:22:13.954 "state": "enabled", 00:22:13.954 "thread": "nvmf_tgt_poll_group_000", 00:22:13.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.954 "listen_address": { 00:22:13.954 "trtype": "TCP", 00:22:13.954 "adrfam": "IPv4", 00:22:13.954 "traddr": "10.0.0.2", 00:22:13.954 "trsvcid": "4420" 00:22:13.954 }, 00:22:13.954 "peer_address": { 00:22:13.954 "trtype": "TCP", 00:22:13.954 "adrfam": "IPv4", 00:22:13.954 "traddr": "10.0.0.1", 00:22:13.954 "trsvcid": "57752" 00:22:13.954 }, 00:22:13.954 "auth": { 00:22:13.954 "state": "completed", 00:22:13.954 "digest": "sha384", 00:22:13.954 "dhgroup": "ffdhe3072" 00:22:13.954 } 00:22:13.954 } 00:22:13.954 ]' 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.954 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.215 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:14.215 15:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:14.787 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
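
After each attach, the suite verifies the negotiated parameters by pulling the subsystem's queue pairs and inspecting the auth object with jq, as in the repeated @74-@77 checks throughout this trace. A condensed sketch of that verification for this ffdhe4096 pass, again with scripts/rpc.py standing in for the rpc_cmd wrapper:

    # dump the active qpairs for the subsystem and check the negotiated auth fields
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A state of "completed" confirms the DH-HMAC-CHAP handshake finished with the digest and DH group selected for this iteration.
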
00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.048 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.308 00:22:15.308 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.308 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.308 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.568 { 00:22:15.568 "cntlid": 73, 00:22:15.568 "qid": 0, 00:22:15.568 "state": "enabled", 00:22:15.568 "thread": "nvmf_tgt_poll_group_000", 00:22:15.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:15.568 "listen_address": { 00:22:15.568 "trtype": "TCP", 00:22:15.568 "adrfam": "IPv4", 00:22:15.568 "traddr": "10.0.0.2", 00:22:15.568 "trsvcid": "4420" 00:22:15.568 }, 00:22:15.568 "peer_address": { 00:22:15.568 "trtype": "TCP", 00:22:15.568 "adrfam": "IPv4", 00:22:15.568 "traddr": "10.0.0.1", 00:22:15.568 "trsvcid": "57774" 00:22:15.568 }, 00:22:15.568 "auth": { 00:22:15.568 "state": "completed", 00:22:15.568 "digest": "sha384", 00:22:15.568 "dhgroup": "ffdhe4096" 00:22:15.568 } 00:22:15.568 } 00:22:15.568 ]' 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.568 
15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.568 15:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.828 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:15.828 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.397 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:16.398 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.659 15:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.920 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.921 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.182 { 00:22:17.182 "cntlid": 75, 00:22:17.182 "qid": 0, 00:22:17.182 "state": "enabled", 00:22:17.182 "thread": "nvmf_tgt_poll_group_000", 00:22:17.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:17.182 "listen_address": { 00:22:17.182 "trtype": "TCP", 00:22:17.182 "adrfam": "IPv4", 00:22:17.182 "traddr": "10.0.0.2", 00:22:17.182 "trsvcid": "4420" 00:22:17.182 }, 00:22:17.182 "peer_address": { 00:22:17.182 "trtype": "TCP", 00:22:17.182 "adrfam": "IPv4", 00:22:17.182 "traddr": "10.0.0.1", 00:22:17.182 "trsvcid": "57796" 00:22:17.182 }, 00:22:17.182 "auth": { 00:22:17.182 "state": "completed", 00:22:17.182 "digest": "sha384", 00:22:17.182 "dhgroup": "ffdhe4096" 00:22:17.182 } 00:22:17.182 } 00:22:17.182 ]' 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.182 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.444 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:17.444 15:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:18.016 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:18.017 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:18.277 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:18.277 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.278 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.539 00:22:18.540 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.540 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.540 15:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.801 { 00:22:18.801 "cntlid": 77, 00:22:18.801 "qid": 0, 00:22:18.801 "state": "enabled", 00:22:18.801 "thread": "nvmf_tgt_poll_group_000", 00:22:18.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.801 "listen_address": { 00:22:18.801 "trtype": "TCP", 00:22:18.801 "adrfam": "IPv4", 00:22:18.801 "traddr": "10.0.0.2", 00:22:18.801 "trsvcid": "4420" 00:22:18.801 }, 00:22:18.801 "peer_address": { 00:22:18.801 "trtype": "TCP", 00:22:18.801 "adrfam": "IPv4", 00:22:18.801 "traddr": "10.0.0.1", 00:22:18.801 "trsvcid": "57828" 00:22:18.801 }, 00:22:18.801 "auth": { 00:22:18.801 "state": "completed", 00:22:18.801 "digest": "sha384", 00:22:18.801 "dhgroup": "ffdhe4096" 00:22:18.801 } 00:22:18.801 } 00:22:18.801 ]' 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.801 15:40:59 
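Each pass of the keyid loop traced above is one complete round trip: register the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC with the same pair, verify, then detach. Reconstructed from the echoed commands (key2 round shown; NQNs and addresses as in this run):

  # Target side: allow this host to authenticate with key2/ckey2.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach over TCP, presenting the same key names.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ...digest/dhgroup/state checks as above, then:
  hostrpc bdev_nvme_detach_controller nvme0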
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.801 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.062 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:19.062 15:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:19.636 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.897 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.898 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.159 00:22:20.159 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.159 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.159 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.421 { 00:22:20.421 "cntlid": 79, 00:22:20.421 "qid": 0, 00:22:20.421 "state": "enabled", 00:22:20.421 "thread": "nvmf_tgt_poll_group_000", 00:22:20.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:20.421 "listen_address": { 00:22:20.421 "trtype": "TCP", 00:22:20.421 "adrfam": "IPv4", 00:22:20.421 "traddr": "10.0.0.2", 00:22:20.421 "trsvcid": "4420" 00:22:20.421 }, 00:22:20.421 "peer_address": { 00:22:20.421 "trtype": "TCP", 00:22:20.421 "adrfam": "IPv4", 00:22:20.421 "traddr": "10.0.0.1", 00:22:20.421 "trsvcid": "56460" 00:22:20.421 }, 00:22:20.421 "auth": { 00:22:20.421 "state": "completed", 00:22:20.421 "digest": "sha384", 00:22:20.421 "dhgroup": "ffdhe4096" 00:22:20.421 } 00:22:20.421 } 00:22:20.421 ]' 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.421 15:41:00 
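key3 is the one entry in this key matrix without a controller key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced above produces an empty array when the corresponding ckeys slot is empty, so both nvmf_subsystem_add_host and the attach run with --dhchap-key key3 alone. A standalone illustration of that bash idiom, with hypothetical values:

  # ${var:+word} expands to word only when var is set and non-empty;
  # an empty expansion inside an array literal adds no elements at all.
  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args: ${ckey[@]:-<none>}"   # prints: extra args: <none>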
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.421 15:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.683 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:20.683 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:21.255 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.255 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.255 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:21.256 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:21.517 15:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.517 15:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.778 00:22:21.778 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.778 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.778 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.040 { 00:22:22.040 "cntlid": 81, 00:22:22.040 "qid": 0, 00:22:22.040 "state": "enabled", 00:22:22.040 "thread": "nvmf_tgt_poll_group_000", 00:22:22.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.040 "listen_address": { 00:22:22.040 "trtype": "TCP", 00:22:22.040 "adrfam": "IPv4", 00:22:22.040 "traddr": "10.0.0.2", 00:22:22.040 "trsvcid": "4420" 00:22:22.040 }, 00:22:22.040 "peer_address": { 00:22:22.040 "trtype": "TCP", 00:22:22.040 "adrfam": "IPv4", 00:22:22.040 "traddr": "10.0.0.1", 00:22:22.040 "trsvcid": "56494" 00:22:22.040 }, 00:22:22.040 "auth": { 00:22:22.040 "state": "completed", 00:22:22.040 "digest": 
"sha384", 00:22:22.040 "dhgroup": "ffdhe6144" 00:22:22.040 } 00:22:22.040 } 00:22:22.040 ]' 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:22.040 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.300 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.301 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.301 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.301 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:22.301 15:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.245 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.506 00:22:23.506 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.506 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.506 15:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.771 { 00:22:23.771 "cntlid": 83, 00:22:23.771 "qid": 0, 00:22:23.771 "state": "enabled", 00:22:23.771 "thread": "nvmf_tgt_poll_group_000", 00:22:23.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:23.771 "listen_address": { 00:22:23.771 "trtype": "TCP", 00:22:23.771 "adrfam": "IPv4", 00:22:23.771 "traddr": "10.0.0.2", 00:22:23.771 
"trsvcid": "4420" 00:22:23.771 }, 00:22:23.771 "peer_address": { 00:22:23.771 "trtype": "TCP", 00:22:23.771 "adrfam": "IPv4", 00:22:23.771 "traddr": "10.0.0.1", 00:22:23.771 "trsvcid": "56512" 00:22:23.771 }, 00:22:23.771 "auth": { 00:22:23.771 "state": "completed", 00:22:23.771 "digest": "sha384", 00:22:23.771 "dhgroup": "ffdhe6144" 00:22:23.771 } 00:22:23.771 } 00:22:23.771 ]' 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.771 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.772 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.034 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.034 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.034 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.034 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:24.034 15:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:24.607 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:24.868 
15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.868 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.442 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.442 { 00:22:25.442 "cntlid": 85, 00:22:25.442 "qid": 0, 00:22:25.442 "state": "enabled", 00:22:25.442 "thread": "nvmf_tgt_poll_group_000", 00:22:25.442 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.442 "listen_address": { 00:22:25.442 "trtype": "TCP", 00:22:25.442 "adrfam": "IPv4", 00:22:25.442 "traddr": "10.0.0.2", 00:22:25.442 "trsvcid": "4420" 00:22:25.442 }, 00:22:25.442 "peer_address": { 00:22:25.442 "trtype": "TCP", 00:22:25.442 "adrfam": "IPv4", 00:22:25.442 "traddr": "10.0.0.1", 00:22:25.442 "trsvcid": "56526" 00:22:25.442 }, 00:22:25.442 "auth": { 00:22:25.442 "state": "completed", 00:22:25.442 "digest": "sha384", 00:22:25.442 "dhgroup": "ffdhe6144" 00:22:25.442 } 00:22:25.442 } 00:22:25.442 ]' 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.442 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.703 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:25.703 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.703 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.703 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.703 15:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.703 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:25.703 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:26.648 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.649 15:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:26.649 15:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.649 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.910 00:22:26.910 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.910 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.910 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.172 { 00:22:27.172 "cntlid": 87, 
00:22:27.172 "qid": 0, 00:22:27.172 "state": "enabled", 00:22:27.172 "thread": "nvmf_tgt_poll_group_000", 00:22:27.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.172 "listen_address": { 00:22:27.172 "trtype": "TCP", 00:22:27.172 "adrfam": "IPv4", 00:22:27.172 "traddr": "10.0.0.2", 00:22:27.172 "trsvcid": "4420" 00:22:27.172 }, 00:22:27.172 "peer_address": { 00:22:27.172 "trtype": "TCP", 00:22:27.172 "adrfam": "IPv4", 00:22:27.172 "traddr": "10.0.0.1", 00:22:27.172 "trsvcid": "56554" 00:22:27.172 }, 00:22:27.172 "auth": { 00:22:27.172 "state": "completed", 00:22:27.172 "digest": "sha384", 00:22:27.172 "dhgroup": "ffdhe6144" 00:22:27.172 } 00:22:27.172 } 00:22:27.172 ]' 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.172 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:27.434 15:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.379 15:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.953 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.953 { 00:22:28.953 "cntlid": 89, 00:22:28.953 "qid": 0, 00:22:28.953 "state": "enabled", 00:22:28.953 "thread": "nvmf_tgt_poll_group_000", 00:22:28.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:28.953 "listen_address": { 00:22:28.953 "trtype": "TCP", 00:22:28.953 "adrfam": "IPv4", 00:22:28.953 "traddr": "10.0.0.2", 00:22:28.953 "trsvcid": "4420" 00:22:28.953 }, 00:22:28.953 "peer_address": { 00:22:28.953 "trtype": "TCP", 00:22:28.953 "adrfam": "IPv4", 00:22:28.953 "traddr": "10.0.0.1", 00:22:28.953 "trsvcid": "34520" 00:22:28.953 }, 00:22:28.953 "auth": { 00:22:28.953 "state": "completed", 00:22:28.953 "digest": "sha384", 00:22:28.953 "dhgroup": "ffdhe8192" 00:22:28.953 } 00:22:28.953 } 00:22:28.953 ]' 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.953 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:29.216 15:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.160 15:41:10 
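Supplying both --dhchap-secret and --dhchap-ctrl-secret makes the exchange bidirectional (the host also authenticates the controller); the key3 rounds, which pass only --dhchap-secret DHHC-1:03:..., appear to cover the unidirectional case. Whatever the variant, the teardown between rounds is the same two steps:

  # Drop the fabrics connection, then revoke the host registration so the
  # next key pair can be installed cleanly.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396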
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.160 15:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.733 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.733 { 00:22:30.733 "cntlid": 91, 00:22:30.733 "qid": 0, 00:22:30.733 "state": "enabled", 00:22:30.733 "thread": "nvmf_tgt_poll_group_000", 00:22:30.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.733 "listen_address": { 00:22:30.733 "trtype": "TCP", 00:22:30.733 "adrfam": "IPv4", 00:22:30.733 "traddr": "10.0.0.2", 00:22:30.733 "trsvcid": "4420" 00:22:30.733 }, 00:22:30.733 "peer_address": { 00:22:30.733 "trtype": "TCP", 00:22:30.733 "adrfam": "IPv4", 00:22:30.733 "traddr": "10.0.0.1", 00:22:30.733 "trsvcid": "34550" 00:22:30.733 }, 00:22:30.733 "auth": { 00:22:30.733 "state": "completed", 00:22:30.733 "digest": "sha384", 00:22:30.733 "dhgroup": "ffdhe8192" 00:22:30.733 } 00:22:30.733 } 00:22:30.733 ]' 00:22:30.733 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.994 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.256 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:31.256 15:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.828 15:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.828 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:32.088 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.089 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.349 00:22:32.611 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.611 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.611 15:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.611 15:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.611 { 00:22:32.611 "cntlid": 93, 00:22:32.611 "qid": 0, 00:22:32.611 "state": "enabled", 00:22:32.611 "thread": "nvmf_tgt_poll_group_000", 00:22:32.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:32.611 "listen_address": { 00:22:32.611 "trtype": "TCP", 00:22:32.611 "adrfam": "IPv4", 00:22:32.611 "traddr": "10.0.0.2", 00:22:32.611 "trsvcid": "4420" 00:22:32.611 }, 00:22:32.611 "peer_address": { 00:22:32.611 "trtype": "TCP", 00:22:32.611 "adrfam": "IPv4", 00:22:32.611 "traddr": "10.0.0.1", 00:22:32.611 "trsvcid": "34590" 00:22:32.611 }, 00:22:32.611 "auth": { 00:22:32.611 "state": "completed", 00:22:32.611 "digest": "sha384", 00:22:32.611 "dhgroup": "ffdhe8192" 00:22:32.611 } 00:22:32.611 } 00:22:32.611 ]' 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.611 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:32.873 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:33.445 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.705 15:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.705 15:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.705 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.276 00:22:34.276 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.276 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.276 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.536 { 00:22:34.536 "cntlid": 95, 00:22:34.536 "qid": 0, 00:22:34.536 "state": "enabled", 00:22:34.536 "thread": "nvmf_tgt_poll_group_000", 00:22:34.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:34.536 "listen_address": { 00:22:34.536 "trtype": "TCP", 00:22:34.536 "adrfam": "IPv4", 00:22:34.536 "traddr": "10.0.0.2", 00:22:34.536 "trsvcid": "4420" 00:22:34.536 }, 00:22:34.536 "peer_address": { 00:22:34.536 "trtype": "TCP", 00:22:34.536 "adrfam": "IPv4", 00:22:34.536 "traddr": "10.0.0.1", 00:22:34.536 "trsvcid": "34612" 00:22:34.536 }, 00:22:34.536 "auth": { 00:22:34.536 "state": "completed", 00:22:34.536 "digest": "sha384", 00:22:34.536 "dhgroup": "ffdhe8192" 00:22:34.536 } 00:22:34.536 } 00:22:34.536 ]' 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.536 15:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.796 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:34.796 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.366 15:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:35.366 15:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.625 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.626 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.626 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.886 00:22:35.886 
15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.886 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.886 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.147 { 00:22:36.147 "cntlid": 97, 00:22:36.147 "qid": 0, 00:22:36.147 "state": "enabled", 00:22:36.147 "thread": "nvmf_tgt_poll_group_000", 00:22:36.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.147 "listen_address": { 00:22:36.147 "trtype": "TCP", 00:22:36.147 "adrfam": "IPv4", 00:22:36.147 "traddr": "10.0.0.2", 00:22:36.147 "trsvcid": "4420" 00:22:36.147 }, 00:22:36.147 "peer_address": { 00:22:36.147 "trtype": "TCP", 00:22:36.147 "adrfam": "IPv4", 00:22:36.147 "traddr": "10.0.0.1", 00:22:36.147 "trsvcid": "34632" 00:22:36.147 }, 00:22:36.147 "auth": { 00:22:36.147 "state": "completed", 00:22:36.147 "digest": "sha512", 00:22:36.147 "dhgroup": "null" 00:22:36.147 } 00:22:36.147 } 00:22:36.147 ]' 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.147 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.408 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:36.408 15:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:36.980 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.242 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.503 00:22:37.503 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.503 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.503 15:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.763 { 00:22:37.763 "cntlid": 99, 00:22:37.763 "qid": 0, 00:22:37.763 "state": "enabled", 00:22:37.763 "thread": "nvmf_tgt_poll_group_000", 00:22:37.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:37.763 "listen_address": { 00:22:37.763 "trtype": "TCP", 00:22:37.763 "adrfam": "IPv4", 00:22:37.763 "traddr": "10.0.0.2", 00:22:37.763 "trsvcid": "4420" 00:22:37.763 }, 00:22:37.763 "peer_address": { 00:22:37.763 "trtype": "TCP", 00:22:37.763 "adrfam": "IPv4", 00:22:37.763 "traddr": "10.0.0.1", 00:22:37.763 "trsvcid": "34662" 00:22:37.763 }, 00:22:37.763 "auth": { 00:22:37.763 "state": "completed", 00:22:37.763 "digest": "sha512", 00:22:37.763 "dhgroup": "null" 00:22:37.763 } 00:22:37.763 } 00:22:37.763 ]' 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.763 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.024 15:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:38.024 15:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:38.594 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:38.595 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
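Up to this point the trace repeats one fixed verification pattern per digest/dhgroup/key combination: restrict the host-side bdev_nvme options to the single digest and DH group under test, register the host NQN on the subsystem with the key material, attach a controller through the SPDK host stack, and assert via nvmf_subsystem_get_qpairs that the qpair reports auth state "completed" with the expected digest and dhgroup before detaching. A minimal sketch of one such iteration (not the literal target/auth.sh), assuming rpc.py is on PATH, the target uses its default RPC socket, the host RPC server listens on /var/tmp/host.sock, and key2/ckey2 name keys the script loaded earlier:

    # sketch of one connect_authenticate iteration (sha512/null/key2 shown)
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null            # allow only the pair under test
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2                # target side: register host + keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2       # DH-CHAP handshake happens here
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -e '.[0].auth.state == "completed"'                  # verify the qpair authenticated
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0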
00:22:38.855 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.115 00:22:39.115 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.115 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.115 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.376 { 00:22:39.376 "cntlid": 101, 00:22:39.376 "qid": 0, 00:22:39.376 "state": "enabled", 00:22:39.376 "thread": "nvmf_tgt_poll_group_000", 00:22:39.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:39.376 "listen_address": { 00:22:39.376 "trtype": "TCP", 00:22:39.376 "adrfam": "IPv4", 00:22:39.376 "traddr": "10.0.0.2", 00:22:39.376 "trsvcid": "4420" 00:22:39.376 }, 00:22:39.376 "peer_address": { 00:22:39.376 "trtype": "TCP", 00:22:39.376 "adrfam": "IPv4", 00:22:39.376 "traddr": "10.0.0.1", 00:22:39.376 "trsvcid": "36950" 00:22:39.376 }, 00:22:39.376 "auth": { 00:22:39.376 "state": "completed", 00:22:39.376 "digest": "sha512", 00:22:39.376 "dhgroup": "null" 00:22:39.376 } 00:22:39.376 } 00:22:39.376 ]' 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.376 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.377 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.637 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:39.637 15:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:40.209 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.469 15:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.730 00:22:40.730 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.730 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.730 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.991 { 00:22:40.991 "cntlid": 103, 00:22:40.991 "qid": 0, 00:22:40.991 "state": "enabled", 00:22:40.991 "thread": "nvmf_tgt_poll_group_000", 00:22:40.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.991 "listen_address": { 00:22:40.991 "trtype": "TCP", 00:22:40.991 "adrfam": "IPv4", 00:22:40.991 "traddr": "10.0.0.2", 00:22:40.991 "trsvcid": "4420" 00:22:40.991 }, 00:22:40.991 "peer_address": { 00:22:40.991 "trtype": "TCP", 00:22:40.991 "adrfam": "IPv4", 00:22:40.991 "traddr": "10.0.0.1", 00:22:40.991 "trsvcid": "36980" 00:22:40.991 }, 00:22:40.991 "auth": { 00:22:40.991 "state": "completed", 00:22:40.991 "digest": "sha512", 00:22:40.991 "dhgroup": "null" 00:22:40.991 } 00:22:40.991 } 00:22:40.991 ]' 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.991 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.252 15:41:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:41.252 15:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:41.824 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.824 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.824 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.825 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
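Each combination is then exercised a second time through the kernel initiator: nvme-cli is handed the same secrets in DHHC-1 ASCII form, the connect must succeed, and the script tears everything down (nvme disconnect, nvmf_subsystem_remove_host) so the next key can be installed cleanly. A sketch of that leg, with the DHHC-1 strings and host UUID from the trace elided into shell variables:

    # sketch of the nvme-cli leg; $key/$ctrl_key are the DHHC-1:xx:...: strings shown above
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expect: disconnected 1 controller(s)
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"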
00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.086 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.371 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.371 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.371 { 00:22:42.371 "cntlid": 105, 00:22:42.371 "qid": 0, 00:22:42.371 "state": "enabled", 00:22:42.371 "thread": "nvmf_tgt_poll_group_000", 00:22:42.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:42.371 "listen_address": { 00:22:42.371 "trtype": "TCP", 00:22:42.371 "adrfam": "IPv4", 00:22:42.371 "traddr": "10.0.0.2", 00:22:42.371 "trsvcid": "4420" 00:22:42.371 }, 00:22:42.371 "peer_address": { 00:22:42.371 "trtype": "TCP", 00:22:42.371 "adrfam": "IPv4", 00:22:42.371 "traddr": "10.0.0.1", 00:22:42.371 "trsvcid": "37004" 00:22:42.371 }, 00:22:42.371 "auth": { 00:22:42.371 "state": "completed", 00:22:42.371 "digest": "sha512", 00:22:42.372 "dhgroup": "ffdhe2048" 00:22:42.372 } 00:22:42.372 } 00:22:42.372 ]' 00:22:42.372 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.632 15:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.632 15:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.893 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:42.893 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:43.464 15:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.725 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.986 00:22:43.986 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.986 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.986 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.247 { 00:22:44.247 "cntlid": 107, 00:22:44.247 "qid": 0, 00:22:44.247 "state": "enabled", 00:22:44.247 "thread": "nvmf_tgt_poll_group_000", 00:22:44.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.247 "listen_address": { 00:22:44.247 "trtype": "TCP", 00:22:44.247 "adrfam": "IPv4", 00:22:44.247 "traddr": "10.0.0.2", 00:22:44.247 "trsvcid": "4420" 00:22:44.247 }, 00:22:44.247 "peer_address": { 00:22:44.247 "trtype": "TCP", 00:22:44.247 "adrfam": "IPv4", 00:22:44.247 "traddr": "10.0.0.1", 00:22:44.247 "trsvcid": "37024" 00:22:44.247 }, 00:22:44.247 "auth": { 00:22:44.247 "state": "completed", 00:22:44.247 "digest": "sha512", 00:22:44.247 "dhgroup": "ffdhe2048" 00:22:44.247 } 00:22:44.247 } 00:22:44.247 ]' 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.247 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.508 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:44.508 15:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:45.081 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
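The key2 iteration starting above repeats the same per-key cycle as the earlier keys. As a rough sketch (not the test script itself), assuming rpc.py is SPDK's scripts/rpc.py talking to the target on its default socket, the host-side instance on /var/tmp/host.sock as in the trace, and $uuid standing in for the host NQN suffix:

# Register the DH-HMAC-CHAP key pair for this host on the subsystem.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:$uuid" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach from the host side with the matching key names; the attach
# succeeds only if authentication completes.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller came up, inspect the negotiated auth fields
# on the target's qpair, then detach so the next key starts clean.
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0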
00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.342 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.602 00:22:45.602 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.602 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.602 15:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.863 { 00:22:45.863 "cntlid": 109, 00:22:45.863 "qid": 0, 00:22:45.863 "state": "enabled", 00:22:45.863 "thread": "nvmf_tgt_poll_group_000", 00:22:45.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:45.863 "listen_address": { 00:22:45.863 "trtype": "TCP", 00:22:45.863 "adrfam": "IPv4", 00:22:45.863 "traddr": "10.0.0.2", 00:22:45.863 "trsvcid": "4420" 00:22:45.863 }, 00:22:45.863 "peer_address": { 00:22:45.863 "trtype": "TCP", 00:22:45.863 "adrfam": "IPv4", 00:22:45.863 "traddr": "10.0.0.1", 00:22:45.863 "trsvcid": "37056" 00:22:45.863 }, 00:22:45.863 "auth": { 00:22:45.863 "state": "completed", 00:22:45.863 "digest": "sha512", 00:22:45.863 "dhgroup": "ffdhe2048" 00:22:45.863 } 00:22:45.863 } 00:22:45.863 ]' 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.863 15:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.863 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.124 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:46.124 15:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.697 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.959 15:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.959 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.220 00:22:47.220 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.220 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.220 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.480 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.480 { 00:22:47.480 "cntlid": 111, 00:22:47.480 "qid": 0, 00:22:47.480 "state": "enabled", 00:22:47.480 "thread": "nvmf_tgt_poll_group_000", 00:22:47.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:47.480 "listen_address": { 00:22:47.480 "trtype": "TCP", 00:22:47.480 "adrfam": "IPv4", 00:22:47.480 "traddr": "10.0.0.2", 00:22:47.480 "trsvcid": "4420" 00:22:47.480 }, 00:22:47.481 "peer_address": { 00:22:47.481 "trtype": "TCP", 00:22:47.481 "adrfam": "IPv4", 00:22:47.481 "traddr": "10.0.0.1", 00:22:47.481 "trsvcid": "37092" 00:22:47.481 }, 00:22:47.481 "auth": { 00:22:47.481 "state": "completed", 00:22:47.481 "digest": "sha512", 00:22:47.481 "dhgroup": "ffdhe2048" 00:22:47.481 } 00:22:47.481 } 00:22:47.481 ]' 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.481 
15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.481 15:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.741 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:47.741 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.313 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:48.314 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.574 15:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.834 00:22:48.834 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.834 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.834 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.094 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.094 { 00:22:49.094 "cntlid": 113, 00:22:49.094 "qid": 0, 00:22:49.094 "state": "enabled", 00:22:49.094 "thread": "nvmf_tgt_poll_group_000", 00:22:49.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:49.094 "listen_address": { 00:22:49.094 "trtype": "TCP", 00:22:49.094 "adrfam": "IPv4", 00:22:49.094 "traddr": "10.0.0.2", 00:22:49.094 "trsvcid": "4420" 00:22:49.094 }, 00:22:49.094 "peer_address": { 00:22:49.094 "trtype": "TCP", 00:22:49.094 "adrfam": "IPv4", 00:22:49.094 "traddr": "10.0.0.1", 00:22:49.094 "trsvcid": "59626" 00:22:49.094 }, 00:22:49.094 "auth": { 00:22:49.094 "state": "completed", 00:22:49.094 "digest": "sha512", 00:22:49.094 "dhgroup": "ffdhe3072" 00:22:49.095 } 00:22:49.095 } 00:22:49.095 ]' 00:22:49.095 15:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.095 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.355 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:49.355 15:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.926 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.188 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.449 00:22:50.449 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.449 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.449 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.710 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.710 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.710 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.710 15:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.710 { 00:22:50.710 "cntlid": 115, 00:22:50.710 "qid": 0, 00:22:50.710 "state": "enabled", 00:22:50.710 "thread": "nvmf_tgt_poll_group_000", 00:22:50.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:50.710 "listen_address": { 00:22:50.710 "trtype": "TCP", 00:22:50.710 "adrfam": "IPv4", 00:22:50.710 "traddr": "10.0.0.2", 00:22:50.710 "trsvcid": "4420" 00:22:50.710 }, 00:22:50.710 "peer_address": { 00:22:50.710 "trtype": "TCP", 00:22:50.710 "adrfam": "IPv4", 
00:22:50.710 "traddr": "10.0.0.1", 00:22:50.710 "trsvcid": "59650" 00:22:50.710 }, 00:22:50.710 "auth": { 00:22:50.710 "state": "completed", 00:22:50.710 "digest": "sha512", 00:22:50.710 "dhgroup": "ffdhe3072" 00:22:50.710 } 00:22:50.710 } 00:22:50.710 ]' 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.710 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.973 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:50.973 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:51.545 15:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.545 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.545 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.545 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.545 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.546 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.546 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:51.546 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.807 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.069 00:22:52.069 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.069 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.069 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.330 { 00:22:52.330 "cntlid": 117, 00:22:52.330 "qid": 0, 00:22:52.330 "state": "enabled", 00:22:52.330 "thread": "nvmf_tgt_poll_group_000", 00:22:52.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:52.330 "listen_address": { 00:22:52.330 "trtype": "TCP", 
00:22:52.330 "adrfam": "IPv4", 00:22:52.330 "traddr": "10.0.0.2", 00:22:52.330 "trsvcid": "4420" 00:22:52.330 }, 00:22:52.330 "peer_address": { 00:22:52.330 "trtype": "TCP", 00:22:52.330 "adrfam": "IPv4", 00:22:52.330 "traddr": "10.0.0.1", 00:22:52.330 "trsvcid": "59664" 00:22:52.330 }, 00:22:52.330 "auth": { 00:22:52.330 "state": "completed", 00:22:52.330 "digest": "sha512", 00:22:52.330 "dhgroup": "ffdhe3072" 00:22:52.330 } 00:22:52.330 } 00:22:52.330 ]' 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.330 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.331 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.592 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:52.592 15:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:53.164 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.426 15:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.689 00:22:53.689 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.689 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.689 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.951 { 00:22:53.951 "cntlid": 119, 00:22:53.951 "qid": 0, 00:22:53.951 "state": "enabled", 00:22:53.951 "thread": "nvmf_tgt_poll_group_000", 00:22:53.951 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.951 "listen_address": { 00:22:53.951 "trtype": "TCP", 00:22:53.951 "adrfam": "IPv4", 00:22:53.951 "traddr": "10.0.0.2", 00:22:53.951 "trsvcid": "4420" 00:22:53.951 }, 00:22:53.951 "peer_address": { 00:22:53.951 "trtype": "TCP", 00:22:53.951 "adrfam": "IPv4", 00:22:53.951 "traddr": "10.0.0.1", 00:22:53.951 "trsvcid": "59686" 00:22:53.951 }, 00:22:53.951 "auth": { 00:22:53.951 "state": "completed", 00:22:53.951 "digest": "sha512", 00:22:53.951 "dhgroup": "ffdhe3072" 00:22:53.951 } 00:22:53.951 } 00:22:53.951 ]' 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.951 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.212 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:54.212 15:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:22:54.784 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.784 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.784 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.784 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.784 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:55.045 15:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.045 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.307 00:22:55.307 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.307 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.307 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.568 15:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.568 { 00:22:55.568 "cntlid": 121, 00:22:55.568 "qid": 0, 00:22:55.568 "state": "enabled", 00:22:55.568 "thread": "nvmf_tgt_poll_group_000", 00:22:55.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:55.568 "listen_address": { 00:22:55.568 "trtype": "TCP", 00:22:55.568 "adrfam": "IPv4", 00:22:55.568 "traddr": "10.0.0.2", 00:22:55.568 "trsvcid": "4420" 00:22:55.568 }, 00:22:55.568 "peer_address": { 00:22:55.568 "trtype": "TCP", 00:22:55.568 "adrfam": "IPv4", 00:22:55.568 "traddr": "10.0.0.1", 00:22:55.568 "trsvcid": "59718" 00:22:55.568 }, 00:22:55.568 "auth": { 00:22:55.568 "state": "completed", 00:22:55.568 "digest": "sha512", 00:22:55.568 "dhgroup": "ffdhe4096" 00:22:55.568 } 00:22:55.568 } 00:22:55.568 ]' 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.568 15:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.568 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:55.568 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.568 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.568 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.568 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.829 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:55.830 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:22:56.401 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.401 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.401 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.401 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.662 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
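Each combination is also exercised through the kernel initiator with nvme-cli, as in the connect/disconnect pair above: the host secret and the controller (bidirectional) secret are passed on the command line rather than by keyring name. A condensed sketch, with the per-run DHHC-1 blobs replaced by placeholders:

# Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets.
# DHHC-1:00:... is the host key, DHHC-1:03:... the controller key;
# both must match what nvmf_subsystem_add_host registered for $uuid.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret "DHHC-1:00:<host-key>" \
    --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>"

# Tear down and deregister before the next digest/dhgroup combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:$uuid"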
00:22:56.662 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.662 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:56.662 15:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.662 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.923 00:22:56.923 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.923 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.923 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.184 { 00:22:57.184 "cntlid": 123, 00:22:57.184 "qid": 0, 00:22:57.184 "state": "enabled", 00:22:57.184 "thread": "nvmf_tgt_poll_group_000", 00:22:57.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:57.184 "listen_address": { 00:22:57.184 "trtype": "TCP", 00:22:57.184 "adrfam": "IPv4", 00:22:57.184 "traddr": "10.0.0.2", 00:22:57.184 "trsvcid": "4420" 00:22:57.184 }, 00:22:57.184 "peer_address": { 00:22:57.184 "trtype": "TCP", 00:22:57.184 "adrfam": "IPv4", 00:22:57.184 "traddr": "10.0.0.1", 00:22:57.184 "trsvcid": "59744" 00:22:57.184 }, 00:22:57.184 "auth": { 00:22:57.184 "state": "completed", 00:22:57.184 "digest": "sha512", 00:22:57.184 "dhgroup": "ffdhe4096" 00:22:57.184 } 00:22:57.184 } 00:22:57.184 ]' 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.184 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.445 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:57.445 15:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:22:58.020 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.020 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.020 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.020 15:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:58.281 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.282 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.543 00:22:58.543 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.543 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.543 15:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.804 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.804 15:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.804 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.804 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.804 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.804 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.804 { 00:22:58.804 "cntlid": 125, 00:22:58.804 "qid": 0, 00:22:58.804 "state": "enabled", 00:22:58.804 "thread": "nvmf_tgt_poll_group_000", 00:22:58.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:58.804 "listen_address": { 00:22:58.804 "trtype": "TCP", 00:22:58.804 "adrfam": "IPv4", 00:22:58.805 "traddr": "10.0.0.2", 00:22:58.805 "trsvcid": "4420" 00:22:58.805 }, 00:22:58.805 "peer_address": { 00:22:58.805 "trtype": "TCP", 00:22:58.805 "adrfam": "IPv4", 00:22:58.805 "traddr": "10.0.0.1", 00:22:58.805 "trsvcid": "58752" 00:22:58.805 }, 00:22:58.805 "auth": { 00:22:58.805 "state": "completed", 00:22:58.805 "digest": "sha512", 00:22:58.805 "dhgroup": "ffdhe4096" 00:22:58.805 } 00:22:58.805 } 00:22:58.805 ]' 00:22:58.805 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.805 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.805 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.805 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:58.805 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.066 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.066 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.066 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.066 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:22:59.067 15:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:23:00.010 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.011 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.271 00:23:00.271 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.271 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.271 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.532 15:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.532 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.532 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.532 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.532 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.532 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.532 { 00:23:00.532 "cntlid": 127, 00:23:00.532 "qid": 0, 00:23:00.532 "state": "enabled", 00:23:00.532 "thread": "nvmf_tgt_poll_group_000", 00:23:00.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:00.532 "listen_address": { 00:23:00.532 "trtype": "TCP", 00:23:00.532 "adrfam": "IPv4", 00:23:00.532 "traddr": "10.0.0.2", 00:23:00.532 "trsvcid": "4420" 00:23:00.532 }, 00:23:00.532 "peer_address": { 00:23:00.532 "trtype": "TCP", 00:23:00.532 "adrfam": "IPv4", 00:23:00.532 "traddr": "10.0.0.1", 00:23:00.532 "trsvcid": "58778" 00:23:00.532 }, 00:23:00.532 "auth": { 00:23:00.532 "state": "completed", 00:23:00.533 "digest": "sha512", 00:23:00.533 "dhgroup": "ffdhe4096" 00:23:00.533 } 00:23:00.533 } 00:23:00.533 ]' 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.533 15:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.793 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:00.793 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.364 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.625 15:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.885 00:23:01.885 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.885 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.885 
15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.146 { 00:23:02.146 "cntlid": 129, 00:23:02.146 "qid": 0, 00:23:02.146 "state": "enabled", 00:23:02.146 "thread": "nvmf_tgt_poll_group_000", 00:23:02.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:02.146 "listen_address": { 00:23:02.146 "trtype": "TCP", 00:23:02.146 "adrfam": "IPv4", 00:23:02.146 "traddr": "10.0.0.2", 00:23:02.146 "trsvcid": "4420" 00:23:02.146 }, 00:23:02.146 "peer_address": { 00:23:02.146 "trtype": "TCP", 00:23:02.146 "adrfam": "IPv4", 00:23:02.146 "traddr": "10.0.0.1", 00:23:02.146 "trsvcid": "58808" 00:23:02.146 }, 00:23:02.146 "auth": { 00:23:02.146 "state": "completed", 00:23:02.146 "digest": "sha512", 00:23:02.146 "dhgroup": "ffdhe6144" 00:23:02.146 } 00:23:02.146 } 00:23:02.146 ]' 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.146 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.407 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:02.407 15:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:02.978 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.237 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.238 15:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.807 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.807 { 00:23:03.807 "cntlid": 131, 00:23:03.807 "qid": 0, 00:23:03.807 "state": "enabled", 00:23:03.807 "thread": "nvmf_tgt_poll_group_000", 00:23:03.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:03.807 "listen_address": { 00:23:03.807 "trtype": "TCP", 00:23:03.807 "adrfam": "IPv4", 00:23:03.807 "traddr": "10.0.0.2", 00:23:03.807 "trsvcid": "4420" 00:23:03.807 }, 00:23:03.807 "peer_address": { 00:23:03.807 "trtype": "TCP", 00:23:03.807 "adrfam": "IPv4", 00:23:03.807 "traddr": "10.0.0.1", 00:23:03.807 "trsvcid": "58850" 00:23:03.807 }, 00:23:03.807 "auth": { 00:23:03.807 "state": "completed", 00:23:03.807 "digest": "sha512", 00:23:03.807 "dhgroup": "ffdhe6144" 00:23:03.807 } 00:23:03.807 } 00:23:03.807 ]' 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:03.807 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.068 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.068 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.068 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.068 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:23:04.068 15:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:23:04.640 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.901 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.472 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.472 { 00:23:05.472 "cntlid": 133, 00:23:05.472 "qid": 0, 00:23:05.472 "state": "enabled", 00:23:05.472 "thread": "nvmf_tgt_poll_group_000", 00:23:05.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:05.472 "listen_address": { 00:23:05.472 "trtype": "TCP", 00:23:05.472 "adrfam": "IPv4", 00:23:05.472 "traddr": "10.0.0.2", 00:23:05.472 "trsvcid": "4420" 00:23:05.472 }, 00:23:05.472 "peer_address": { 00:23:05.472 "trtype": "TCP", 00:23:05.472 "adrfam": "IPv4", 00:23:05.472 "traddr": "10.0.0.1", 00:23:05.472 "trsvcid": "58866" 00:23:05.472 }, 00:23:05.472 "auth": { 00:23:05.472 "state": "completed", 00:23:05.472 "digest": "sha512", 00:23:05.472 "dhgroup": "ffdhe6144" 00:23:05.472 } 00:23:05.472 } 00:23:05.472 ]' 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.472 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.732 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:05.732 15:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.732 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.732 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.732 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.992 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret 
DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:23:05.992 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:23:06.563 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:06.564 15:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:06.824 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:23:06.825 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.086 00:23:07.086 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.086 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.086 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.347 { 00:23:07.347 "cntlid": 135, 00:23:07.347 "qid": 0, 00:23:07.347 "state": "enabled", 00:23:07.347 "thread": "nvmf_tgt_poll_group_000", 00:23:07.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:07.347 "listen_address": { 00:23:07.347 "trtype": "TCP", 00:23:07.347 "adrfam": "IPv4", 00:23:07.347 "traddr": "10.0.0.2", 00:23:07.347 "trsvcid": "4420" 00:23:07.347 }, 00:23:07.347 "peer_address": { 00:23:07.347 "trtype": "TCP", 00:23:07.347 "adrfam": "IPv4", 00:23:07.347 "traddr": "10.0.0.1", 00:23:07.347 "trsvcid": "58888" 00:23:07.347 }, 00:23:07.347 "auth": { 00:23:07.347 "state": "completed", 00:23:07.347 "digest": "sha512", 00:23:07.347 "dhgroup": "ffdhe6144" 00:23:07.347 } 00:23:07.347 } 00:23:07.347 ]' 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.347 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.608 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:07.608 15:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:08.180 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.441 15:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.012 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.012 { 00:23:09.012 "cntlid": 137, 00:23:09.012 "qid": 0, 00:23:09.012 "state": "enabled", 00:23:09.012 "thread": "nvmf_tgt_poll_group_000", 00:23:09.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:09.012 "listen_address": { 00:23:09.012 "trtype": "TCP", 00:23:09.012 "adrfam": "IPv4", 00:23:09.012 "traddr": "10.0.0.2", 00:23:09.012 "trsvcid": "4420" 00:23:09.012 }, 00:23:09.012 "peer_address": { 00:23:09.012 "trtype": "TCP", 00:23:09.012 "adrfam": "IPv4", 00:23:09.012 "traddr": "10.0.0.1", 00:23:09.012 "trsvcid": "46864" 00:23:09.012 }, 00:23:09.012 "auth": { 00:23:09.012 "state": "completed", 00:23:09.012 "digest": "sha512", 00:23:09.012 "dhgroup": "ffdhe8192" 00:23:09.012 } 00:23:09.012 } 00:23:09.012 ]' 00:23:09.012 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.273 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.534 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:09.534 15:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.107 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:10.367 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.368 15:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.368 15:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.628 00:23:10.888 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.888 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.889 { 00:23:10.889 "cntlid": 139, 00:23:10.889 "qid": 0, 00:23:10.889 "state": "enabled", 00:23:10.889 "thread": "nvmf_tgt_poll_group_000", 00:23:10.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:10.889 "listen_address": { 00:23:10.889 "trtype": "TCP", 00:23:10.889 "adrfam": "IPv4", 00:23:10.889 "traddr": "10.0.0.2", 00:23:10.889 "trsvcid": "4420" 00:23:10.889 }, 00:23:10.889 "peer_address": { 00:23:10.889 "trtype": "TCP", 00:23:10.889 "adrfam": "IPv4", 00:23:10.889 "traddr": "10.0.0.1", 00:23:10.889 "trsvcid": "46896" 00:23:10.889 }, 00:23:10.889 "auth": { 00:23:10.889 "state": "completed", 00:23:10.889 "digest": "sha512", 00:23:10.889 "dhgroup": "ffdhe8192" 00:23:10.889 } 00:23:10.889 } 00:23:10.889 ]' 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.889 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.150 15:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:23:11.150 15:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: --dhchap-ctrl-secret DHHC-1:02:OGFlNWM5NzRkZDBmZWYwNTI0YjVlM2FiNzZiNTk3N2YyZGYzZmViMjU2NjVlM2QxhGH64w==: 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.092 15:41:52 
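Every round also exercises the kernel initiator: nvme connect is invoked with both secrets and the association is torn down with nvme disconnect before the host entry is removed. A trimmed sketch of that leg; in nvme-cli, -i and -l are (to the best of my reading) the short forms of --nr-io-queues and --ctrl-loss-tmo, so double-check against the installed build. Secrets are elided as placeholders.

# Kernel-initiator leg of a round; HOSTNQN/HOSTID are the uuid-based
# identifiers used throughout this log, secrets are placeholders.
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
    -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "DHHC-1:01:<base64>:" \
    --dhchap-ctrl-secret "DHHC-1:02:<base64>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0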
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.092 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.663 00:23:12.663 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.663 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.663 15:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.663 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.924 { 00:23:12.924 "cntlid": 141, 00:23:12.924 "qid": 0, 00:23:12.924 "state": "enabled", 00:23:12.924 "thread": "nvmf_tgt_poll_group_000", 00:23:12.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:12.924 "listen_address": { 00:23:12.924 "trtype": "TCP", 00:23:12.924 "adrfam": "IPv4", 00:23:12.924 "traddr": "10.0.0.2", 00:23:12.924 "trsvcid": "4420" 00:23:12.924 }, 00:23:12.924 "peer_address": { 00:23:12.924 "trtype": "TCP", 00:23:12.924 "adrfam": "IPv4", 00:23:12.924 "traddr": "10.0.0.1", 00:23:12.924 "trsvcid": "46924" 00:23:12.924 }, 00:23:12.924 "auth": { 00:23:12.924 "state": "completed", 00:23:12.924 "digest": "sha512", 00:23:12.924 "dhgroup": "ffdhe8192" 00:23:12.924 } 00:23:12.924 } 00:23:12.924 ]' 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.924 15:41:53 
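The key2 round above follows the suite's standard setup: the host RPC server is first pinned to exactly one digest and one DH group, then the key pair is registered for the host NQN on the target, and only then is the controller attached. Stripped of the xtrace noise, the sequence is:

# Setup behind connect_authenticate (key2 iteration shown).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1) Host side: allow only the digest/dhgroup pair under test.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# 2) Target side: register the named keyring entries for this host.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3) Attach from the host-side bdev layer with the same key names.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2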
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.924 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.185 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:23:13.185 15:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:01:YjAxMWFjMTExMzhjNTRiMDk5MWM5NmVjNDUyZGUxMDISQyKq: 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.756 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.017 15:41:54 
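The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above is how the suite toggles between bidirectional and unidirectional authentication with a single code path: when no controller secret exists for an index, the array stays empty and the option vanishes from the later RPC calls. A reduced illustration of the idiom:

# ${arr[i]:+words} expands to nothing when arr[i] is unset or empty,
# so the optional flag simply disappears for key3 (no ckey3 generated).
ckeys=(s0 s1 s2 "")          # index 3 deliberately empty
i=3
ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
echo "extra args: ${ckey[*]:-<none>}"   # -> extra args: <none>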
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.017 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.278 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.539 { 00:23:14.539 "cntlid": 143, 00:23:14.539 "qid": 0, 00:23:14.539 "state": "enabled", 00:23:14.539 "thread": "nvmf_tgt_poll_group_000", 00:23:14.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:14.539 "listen_address": { 00:23:14.539 "trtype": "TCP", 00:23:14.539 "adrfam": "IPv4", 00:23:14.539 "traddr": "10.0.0.2", 00:23:14.539 "trsvcid": "4420" 00:23:14.539 }, 00:23:14.539 "peer_address": { 00:23:14.539 "trtype": "TCP", 00:23:14.539 "adrfam": "IPv4", 00:23:14.539 "traddr": "10.0.0.1", 00:23:14.539 "trsvcid": "46960" 00:23:14.539 }, 00:23:14.539 "auth": { 00:23:14.539 "state": "completed", 00:23:14.539 "digest": "sha512", 00:23:14.539 "dhgroup": "ffdhe8192" 00:23:14.539 } 00:23:14.539 } 00:23:14.539 ]' 00:23:14.539 15:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.800 
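With the key3 iteration above, all key indices have been driven through the same path. Schematically, the loop visible in the @120/@121/@123 entries reduces to the following, where hostrpc and connect_authenticate are the suite's own helpers seen throughout this log and their bodies are elided:

# Skeleton of the per-key loop driving these rounds.
keys=(key0 key1 key2 key3)
for keyid in "${!keys[@]}"; do
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe8192
    connect_authenticate sha512 ffdhe8192 "$keyid"
done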
15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.800 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.061 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:15.061 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.631 15:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.892 15:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.892 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.153 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.414 { 00:23:16.414 "cntlid": 145, 00:23:16.414 "qid": 0, 00:23:16.414 "state": "enabled", 00:23:16.414 "thread": "nvmf_tgt_poll_group_000", 00:23:16.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:16.414 "listen_address": { 00:23:16.414 "trtype": "TCP", 00:23:16.414 "adrfam": "IPv4", 00:23:16.414 "traddr": "10.0.0.2", 00:23:16.414 "trsvcid": "4420" 00:23:16.414 }, 00:23:16.414 "peer_address": { 00:23:16.414 
"trtype": "TCP", 00:23:16.414 "adrfam": "IPv4", 00:23:16.414 "traddr": "10.0.0.1", 00:23:16.414 "trsvcid": "47004" 00:23:16.414 }, 00:23:16.414 "auth": { 00:23:16.414 "state": "completed", 00:23:16.414 "digest": "sha512", 00:23:16.414 "dhgroup": "ffdhe8192" 00:23:16.414 } 00:23:16.414 } 00:23:16.414 ]' 00:23:16.414 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.675 15:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.936 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:16.936 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZjFlMTU1ZTEzOWYyYWNhNGM3MTc4YmFhYTIwYzI5YmVkMmFkNzlmNDMzZWVhNzJjxLk3pQ==: --dhchap-ctrl-secret DHHC-1:03:YmI3YjFlMjE5MDliMTU2YTk0Mjc0ZDVmMzkxNThmMWM2YzBiYWYyMWE3MjdkYjhhZThjZDMwMzNmMzhiOGYxZkbNJFg=: 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:17.508 15:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:18.080 request: 00:23:18.080 { 00:23:18.080 "name": "nvme0", 00:23:18.080 "trtype": "tcp", 00:23:18.080 "traddr": "10.0.0.2", 00:23:18.080 "adrfam": "ipv4", 00:23:18.080 "trsvcid": "4420", 00:23:18.080 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:18.080 "prchk_reftag": false, 00:23:18.080 "prchk_guard": false, 00:23:18.080 "hdgst": false, 00:23:18.080 "ddgst": false, 00:23:18.080 "dhchap_key": "key2", 00:23:18.080 "allow_unrecognized_csi": false, 00:23:18.080 "method": "bdev_nvme_attach_controller", 00:23:18.080 "req_id": 1 00:23:18.080 } 00:23:18.080 Got JSON-RPC error response 00:23:18.080 response: 00:23:18.080 { 00:23:18.080 "code": -5, 00:23:18.080 "message": "Input/output error" 00:23:18.080 } 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.080 15:41:58 
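The @145 step is the first deliberate failure: key2 was never registered for this host, so bdev_nvme_attach_controller comes back with JSON-RPC code -5 (Input/output error), and the NOT wrapper inverts the exit status so the test passes only when the call fails. A simplified stand-in for that wrapper; the real helper in autotest_common.sh additionally screens for exit codes above 128 (crashes rather than clean failures):

# Minimal stand-in for the suite's NOT helper.
NOT() {
    if "$@"; then
        return 1        # wrapped command unexpectedly succeeded
    fi
    return 0            # it failed, which is what we wanted
}
# Expected to fail: key2 is not registered for this host on the target.
NOT "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    -b nvme0 --dhchap-key key2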
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:18.080 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:18.341 request: 00:23:18.341 { 00:23:18.341 "name": "nvme0", 00:23:18.341 "trtype": "tcp", 00:23:18.341 "traddr": "10.0.0.2", 00:23:18.341 "adrfam": "ipv4", 00:23:18.341 "trsvcid": "4420", 00:23:18.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:18.341 "prchk_reftag": false, 00:23:18.341 "prchk_guard": false, 00:23:18.341 "hdgst": false, 00:23:18.341 "ddgst": false, 00:23:18.341 "dhchap_key": "key1", 00:23:18.341 "dhchap_ctrlr_key": "ckey2", 00:23:18.341 "allow_unrecognized_csi": false, 00:23:18.341 "method": "bdev_nvme_attach_controller", 00:23:18.341 "req_id": 1 00:23:18.341 } 00:23:18.341 Got JSON-RPC error response 00:23:18.341 response: 00:23:18.341 { 00:23:18.341 "code": -5, 00:23:18.341 "message": "Input/output error" 00:23:18.341 } 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:18.341 15:41:58 
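The @150 case probes the bidirectional path from the other side: the host presents ckey2 as the controller key while the target registered ckey1, and mutual authentication fails with the identical -5 error body. Written as a plain expected-failure assertion rather than through NOT, the same check would look like:

# Wrong controller key: host offers ckey2, target holds ckey1.
if "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "FAIL: attach with mismatched controller key succeeded" >&2
    exit 1
fi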
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.341 15:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.913 request: 00:23:18.913 { 00:23:18.913 "name": "nvme0", 00:23:18.913 "trtype": "tcp", 00:23:18.913 "traddr": "10.0.0.2", 00:23:18.913 "adrfam": "ipv4", 00:23:18.913 "trsvcid": "4420", 00:23:18.913 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:18.913 "prchk_reftag": false, 00:23:18.913 "prchk_guard": false, 00:23:18.913 "hdgst": false, 00:23:18.913 "ddgst": false, 00:23:18.913 "dhchap_key": "key1", 00:23:18.913 "dhchap_ctrlr_key": "ckey1", 00:23:18.913 "allow_unrecognized_csi": false, 00:23:18.913 "method": "bdev_nvme_attach_controller", 00:23:18.913 "req_id": 1 00:23:18.913 } 00:23:18.913 Got JSON-RPC error response 00:23:18.913 response: 00:23:18.913 { 00:23:18.913 "code": -5, 00:23:18.913 "message": "Input/output error" 00:23:18.913 } 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 369619 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 369619 ']' 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 369619 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 369619 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 369619' 00:23:18.913 killing process with pid 369619 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 369619 00:23:18.913 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 369619 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=395731 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 395731 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 395731 ']' 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.174 15:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 395731 00:23:20.116 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 395731 ']' 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
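At @159/@160 the first target (pid 369619) is killed and a fresh nvmf_tgt (pid 395731) is started inside the job's network namespace with --wait-for-rpc, which pauses initialization at the RPC layer so keyring entries can be loaded first, and -L nvmf_auth, which enables the auth debug log component. A sketch of that restart; the polling loop is a simplified stand-in for the suite's waitforlisten helper, not its actual implementation.

# Restart the target paused at the RPC layer, with auth debug logging.
BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk "$BIN" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Wait until the default RPC socket answers (simplified waitforlisten).
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done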
00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.117 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.117 null0 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ybg 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AoF ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AoF 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CwZ 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qpH ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qpH 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:20.378 15:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cku 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RWD ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RWD 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.euC 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
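The registrations just completed load the on-disk secrets as named keyring entries; every later nvmf_subsystem_add_host and bdev_nvme_attach_controller call refers to these names rather than to the secret material itself. Gathered in one place, minus the xtrace noise (file names are the ones generated earlier in the job; key3 intentionally has no companion ckey3):

# Keyring registration as performed above.
"$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.Ybg
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AoF
"$RPC" keyring_file_add_key key1  /tmp/spdk.key-sha256.CwZ
"$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qpH
"$RPC" keyring_file_add_key key2  /tmp/spdk.key-sha384.cku
"$RPC" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RWD
"$RPC" keyring_file_add_key key3  /tmp/spdk.key-sha512.euC   # no ckey3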
00:23:20.378 15:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.319 nvme0n1 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.319 { 00:23:21.319 "cntlid": 1, 00:23:21.319 "qid": 0, 00:23:21.319 "state": "enabled", 00:23:21.319 "thread": "nvmf_tgt_poll_group_000", 00:23:21.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:21.319 "listen_address": { 00:23:21.319 "trtype": "TCP", 00:23:21.319 "adrfam": "IPv4", 00:23:21.319 "traddr": "10.0.0.2", 00:23:21.319 "trsvcid": "4420" 00:23:21.319 }, 00:23:21.319 "peer_address": { 00:23:21.319 "trtype": "TCP", 00:23:21.319 "adrfam": "IPv4", 00:23:21.319 "traddr": "10.0.0.1", 00:23:21.319 "trsvcid": "43152" 00:23:21.319 }, 00:23:21.319 "auth": { 00:23:21.319 "state": "completed", 00:23:21.319 "digest": "sha512", 00:23:21.319 "dhgroup": "ffdhe8192" 00:23:21.319 } 00:23:21.319 } 00:23:21.319 ]' 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.319 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.579 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:21.579 15:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:22.151 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:22.411 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.412 15:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.671 request: 00:23:22.671 { 00:23:22.671 "name": "nvme0", 00:23:22.671 "trtype": "tcp", 00:23:22.671 "traddr": "10.0.0.2", 00:23:22.671 "adrfam": "ipv4", 00:23:22.671 "trsvcid": "4420", 00:23:22.671 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:22.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:22.672 "prchk_reftag": false, 00:23:22.672 "prchk_guard": false, 00:23:22.672 "hdgst": false, 00:23:22.672 "ddgst": false, 00:23:22.672 "dhchap_key": "key3", 00:23:22.672 "allow_unrecognized_csi": false, 00:23:22.672 "method": "bdev_nvme_attach_controller", 00:23:22.672 "req_id": 1 00:23:22.672 } 00:23:22.672 Got JSON-RPC error response 00:23:22.672 response: 00:23:22.672 { 00:23:22.672 "code": -5, 00:23:22.672 "message": "Input/output error" 00:23:22.672 } 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:22.672 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.932 request: 00:23:22.932 { 00:23:22.932 "name": "nvme0", 00:23:22.932 "trtype": "tcp", 00:23:22.932 "traddr": "10.0.0.2", 00:23:22.932 "adrfam": "ipv4", 00:23:22.932 "trsvcid": "4420", 00:23:22.932 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:22.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:22.932 "prchk_reftag": false, 00:23:22.932 "prchk_guard": false, 00:23:22.932 "hdgst": false, 00:23:22.932 "ddgst": false, 00:23:22.932 "dhchap_key": "key3", 00:23:22.932 "allow_unrecognized_csi": false, 00:23:22.932 "method": "bdev_nvme_attach_controller", 00:23:22.932 "req_id": 1 00:23:22.932 } 00:23:22.932 Got JSON-RPC error response 00:23:22.932 response: 00:23:22.932 { 00:23:22.932 "code": -5, 00:23:22.932 "message": "Input/output error" 00:23:22.932 } 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:22.932 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.192 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.193 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.453 request: 00:23:23.453 { 00:23:23.453 "name": "nvme0", 00:23:23.453 "trtype": "tcp", 00:23:23.453 "traddr": "10.0.0.2", 00:23:23.453 "adrfam": "ipv4", 00:23:23.453 "trsvcid": "4420", 00:23:23.453 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:23.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:23.453 "prchk_reftag": false, 00:23:23.453 "prchk_guard": false, 00:23:23.453 "hdgst": false, 00:23:23.453 "ddgst": false, 00:23:23.453 "dhchap_key": "key0", 00:23:23.453 "dhchap_ctrlr_key": "key1", 00:23:23.453 "allow_unrecognized_csi": false, 00:23:23.453 "method": "bdev_nvme_attach_controller", 00:23:23.453 "req_id": 1 00:23:23.453 } 00:23:23.453 Got JSON-RPC error response 00:23:23.453 response: 00:23:23.453 { 00:23:23.453 "code": -5, 00:23:23.453 "message": "Input/output error" 00:23:23.453 } 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.453 15:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:23.453 15:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:23.713 nvme0n1 00:23:23.713 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:23.713 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:23.713 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.973 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.973 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.973 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:24.234 15:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:24.806 nvme0n1 00:23:24.806 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:24.806 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:24.806 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:25.066 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.327 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.327 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:25.327 15:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: --dhchap-ctrl-secret DHHC-1:03:YjM2MTEzYzE2Njg3MmRhMjc3ZmU5YzZhOGJiOGU4YTk3MzRmNTYwMDJjZjVjNWY5YmViZTI2YzlmMWY0ODNiOQEpGbY=: 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.898 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:26.160 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:26.737 request: 00:23:26.737 { 00:23:26.737 "name": "nvme0", 00:23:26.737 "trtype": "tcp", 00:23:26.737 "traddr": "10.0.0.2", 00:23:26.737 "adrfam": "ipv4", 00:23:26.737 "trsvcid": "4420", 00:23:26.737 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:26.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:26.737 "prchk_reftag": false, 00:23:26.737 "prchk_guard": false, 00:23:26.737 "hdgst": false, 00:23:26.737 "ddgst": false, 00:23:26.737 "dhchap_key": "key1", 00:23:26.737 "allow_unrecognized_csi": false, 00:23:26.737 "method": "bdev_nvme_attach_controller", 00:23:26.737 "req_id": 1 00:23:26.737 } 00:23:26.737 Got JSON-RPC error response 00:23:26.737 response: 00:23:26.737 { 00:23:26.737 "code": -5, 00:23:26.737 "message": "Input/output error" 00:23:26.737 } 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:26.737 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:26.738 15:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:27.306 nvme0n1 00:23:27.306 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:27.306 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.306 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:27.566 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.566 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.566 15:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:27.827 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:27.827 nvme0n1 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.088 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: '' 2s 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: ]] 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjRlMzk0N2IzNzE1YmU0MzJhMTMzZjJhOWYwNTNmODdWvOUz: 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:28.349 15:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: 2s 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: ]] 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTdlY2I0ODRiNzc5YjY0YjYyMmVkMmMxYTIyODU3ZjQ4Zjg3MjBmZDcwMWVmYzg2ye529Q==: 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:30.263 15:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:32.811 15:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:33.072 nvme0n1 00:23:33.072 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.072 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.072 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.332 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.332 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.332 15:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.594 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:33.594 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:33.594 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:33.855 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:34.116 15:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:34.687 request: 00:23:34.687 { 00:23:34.687 "name": "nvme0", 00:23:34.687 "dhchap_key": "key1", 00:23:34.687 "dhchap_ctrlr_key": "key3", 00:23:34.687 "method": "bdev_nvme_set_keys", 00:23:34.687 "req_id": 1 00:23:34.687 } 00:23:34.687 Got JSON-RPC error response 00:23:34.687 response: 00:23:34.687 { 00:23:34.687 "code": -13, 00:23:34.687 "message": "Permission denied" 00:23:34.687 } 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:34.687 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.948 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:34.948 15:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:35.889 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:35.889 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.889 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:35.889 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:35.890 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:35.890 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.890 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.149 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.149 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:36.149 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:36.149 15:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:36.721 nvme0n1 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:36.721 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:37.293 request: 00:23:37.293 { 00:23:37.293 "name": "nvme0", 00:23:37.293 "dhchap_key": "key2", 00:23:37.293 "dhchap_ctrlr_key": "key0", 00:23:37.293 "method": "bdev_nvme_set_keys", 00:23:37.293 "req_id": 1 00:23:37.293 } 00:23:37.293 Got JSON-RPC error response 00:23:37.293 response: 00:23:37.293 { 00:23:37.293 "code": -13, 00:23:37.293 "message": "Permission denied" 00:23:37.293 } 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:37.293 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.554 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:37.554 15:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 369734 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 369734 ']' 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 369734 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:38.498 15:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.498 15:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 369734 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 369734' 00:23:38.759 killing process with pid 369734 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 369734 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 369734 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.759 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.759 rmmod nvme_tcp 00:23:39.020 rmmod nvme_fabrics 00:23:39.020 rmmod nvme_keyring 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 395731 ']' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 395731 ']' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395731' 00:23:39.020 killing process with pid 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 395731 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.020 15:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ybg /tmp/spdk.key-sha256.CwZ /tmp/spdk.key-sha384.cku /tmp/spdk.key-sha512.euC /tmp/spdk.key-sha512.AoF /tmp/spdk.key-sha384.qpH /tmp/spdk.key-sha256.RWD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:41.571 00:23:41.571 real 2m39.815s 00:23:41.571 user 5m58.783s 00:23:41.571 sys 0m24.534s 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.571 ************************************ 00:23:41.571 END TEST nvmf_auth_target 00:23:41.571 ************************************ 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:41.571 ************************************ 00:23:41.571 START TEST nvmf_bdevio_no_huge 00:23:41.571 ************************************ 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:41.571 * Looking for test storage... 
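The nvmf_auth_target test that just completed exercises SPDK's DH-HMAC-CHAP support end to end: the target authorizes a host and pins its keys with nvmf_subsystem_add_host / nvmf_subsystem_set_keys, the host attaches with matching secrets via bdev_nvme_attach_controller, re-keys a live controller with bdev_nvme_set_keys, and the NOT wrapper asserts that mismatched keys fail with exactly the JSON-RPC errors recorded above ("Input/output error", code -5, for a failed attach; "Permission denied", code -13, for a refused re-key). A minimal sketch of that RPC sequence, assuming a target already listening on 10.0.0.2:4420, a host-side rpc.py socket at /var/tmp/host.sock, and DH-CHAP keys preloaded under the names key0..key3 as in this run; HOSTNQN is a placeholder for the host NQN used above:

  # Target side: authorize the host and pin which DH-CHAP keys it must present.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key key1
  # Host side: attach with the matching secrets; presenting a key the target
  # does not expect fails the authentication transaction (code -5 above).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key key1
  # Live re-key: rotate keys on the target first, then on the attached
  # controller; asking the controller to rotate to keys the target was not
  # given is refused with -13 Permission denied, as in the attempts above.
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

All RPC names and flags in this sketch are taken verbatim from the log; only the relative rpc.py path and the HOSTNQN variable are illustrative shorthand.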
00:23:41.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.571 --rc genhtml_branch_coverage=1 00:23:41.571 --rc genhtml_function_coverage=1 00:23:41.571 --rc genhtml_legend=1 00:23:41.571 --rc geninfo_all_blocks=1 00:23:41.571 --rc geninfo_unexecuted_blocks=1 00:23:41.571 00:23:41.571 ' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.571 --rc genhtml_branch_coverage=1 00:23:41.571 --rc genhtml_function_coverage=1 00:23:41.571 --rc genhtml_legend=1 00:23:41.571 --rc geninfo_all_blocks=1 00:23:41.571 --rc geninfo_unexecuted_blocks=1 00:23:41.571 00:23:41.571 ' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.571 --rc genhtml_branch_coverage=1 00:23:41.571 --rc genhtml_function_coverage=1 00:23:41.571 --rc genhtml_legend=1 00:23:41.571 --rc geninfo_all_blocks=1 00:23:41.571 --rc geninfo_unexecuted_blocks=1 00:23:41.571 00:23:41.571 ' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.571 --rc genhtml_branch_coverage=1 00:23:41.571 --rc genhtml_function_coverage=1 00:23:41.571 --rc genhtml_legend=1 00:23:41.571 --rc geninfo_all_blocks=1 00:23:41.571 --rc geninfo_unexecuted_blocks=1 00:23:41.571 00:23:41.571 ' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.571 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:41.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.572 15:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.738 
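(The "[: : integer expression expected" message traced just above is bash's test builtin rejecting an empty string in a numeric comparison: nvmf/common.sh line 33 expands an unset variable inside '[' ... -eq 1 ']'. A minimal sketch of the failure mode and a defensive rewrite — VAR is a hypothetical stand-in, since the real variable's name is not visible in this trace:

    VAR=
    [ "$VAR" -eq 1 ]        # prints "[: : integer expression expected", returns status 2
    [ "${VAR:-0}" -eq 1 ]   # defaulting to 0 keeps the operand numeric and silences the error

The message is noise rather than a failure here: the non-zero status simply routes the script past the optional branch.)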
15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:49.738 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:49.738 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:49.738 Found net devices under 0000:31:00.0: cvl_0_0 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:49.738 Found net devices under 0000:31:00.1: cvl_0_1 00:23:49.738 
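(With both E810 ports mapped to cvl_0_0 and cvl_0_1, nvmf_tcp_init — traced below — builds a two-namespace topology: the target port moves into its own network namespace so initiator and target traffic genuinely crosses the link. The sequence reduces to the following sketch, the same ip/iptables commands as the trace minus the xtrace bookkeeping; run as root:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

The two pings at the end are the health check; nvmftestinit only returns 0 once both directions answer.)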
15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.738 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:23:49.739 00:23:49.739 --- 10.0.0.2 ping statistics --- 00:23:49.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.739 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:49.739 00:23:49.739 --- 10.0.0.1 ping statistics --- 00:23:49.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.739 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=404532 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 404532 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 404532 ']' 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.739 15:42:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:49.739 [2024-09-27 15:42:29.608783] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:49.739 [2024-09-27 15:42:29.608859] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:49.739 [2024-09-27 15:42:29.701225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.739 [2024-09-27 15:42:29.781533] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.739 [2024-09-27 15:42:29.781582] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.739 [2024-09-27 15:42:29.781591] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.739 [2024-09-27 15:42:29.781599] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.739 [2024-09-27 15:42:29.781605] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.739 [2024-09-27 15:42:29.781769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:49.739 [2024-09-27 15:42:29.781964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:49.739 [2024-09-27 15:42:29.782008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.739 [2024-09-27 15:42:29.782009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.001 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.263 [2024-09-27 15:42:30.491357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.263 Malloc0 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:50.263 [2024-09-27 15:42:30.545395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:50.263 { 00:23:50.263 "params": { 00:23:50.263 "name": "Nvme$subsystem", 00:23:50.263 "trtype": "$TEST_TRANSPORT", 00:23:50.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.263 "adrfam": "ipv4", 00:23:50.263 "trsvcid": "$NVMF_PORT", 00:23:50.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.263 "hdgst": ${hdgst:-false}, 00:23:50.263 "ddgst": ${ddgst:-false} 00:23:50.263 }, 00:23:50.263 "method": "bdev_nvme_attach_controller" 00:23:50.263 } 00:23:50.263 EOF 00:23:50.263 )") 00:23:50.263 15:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:50.263 15:42:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:50.263 "params": { 00:23:50.263 "name": "Nvme1", 00:23:50.263 "trtype": "tcp", 00:23:50.263 "traddr": "10.0.0.2", 00:23:50.263 "adrfam": "ipv4", 00:23:50.263 "trsvcid": "4420", 00:23:50.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.263 "hdgst": false, 00:23:50.263 "ddgst": false 00:23:50.263 }, 00:23:50.263 "method": "bdev_nvme_attach_controller" 00:23:50.263 }' 00:23:50.263 [2024-09-27 15:42:30.603237] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:50.263 [2024-09-27 15:42:30.603307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid404662 ] 00:23:50.263 [2024-09-27 15:42:30.686235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.525 [2024-09-27 15:42:30.766455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.525 [2024-09-27 15:42:30.766619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.525 [2024-09-27 15:42:30.766619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.786 I/O targets: 00:23:50.786 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:50.786 00:23:50.786 00:23:50.786 CUnit - A unit testing framework for C - Version 2.1-3 00:23:50.786 http://cunit.sourceforge.net/ 00:23:50.786 00:23:50.786 00:23:50.786 Suite: bdevio tests on: Nvme1n1 00:23:50.786 Test: blockdev write read block ...passed 00:23:50.786 Test: blockdev write zeroes read block ...passed 00:23:50.786 Test: blockdev write zeroes read no split ...passed 00:23:50.786 Test: blockdev write zeroes read split ...passed 00:23:51.047 Test: blockdev write zeroes read split partial ...passed 00:23:51.047 Test: blockdev reset ...[2024-09-27 15:42:31.293558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:51.047 [2024-09-27 15:42:31.293663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c251c0 (9): Bad file descriptor 00:23:51.047 [2024-09-27 15:42:31.435608] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:51.047 passed 00:23:51.047 Test: blockdev write read 8 blocks ...passed 00:23:51.047 Test: blockdev write read size > 128k ...passed 00:23:51.047 Test: blockdev write read invalid size ...passed 00:23:51.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:51.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:51.047 Test: blockdev write read max offset ...passed 00:23:51.308 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:51.308 Test: blockdev writev readv 8 blocks ...passed 00:23:51.308 Test: blockdev writev readv 30 x 1block ...passed 00:23:51.308 Test: blockdev writev readv block ...passed 00:23:51.308 Test: blockdev writev readv size > 128k ...passed 00:23:51.308 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:51.308 Test: blockdev comparev and writev ...[2024-09-27 15:42:31.744100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.744152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.744169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.744178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.744720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.744733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.744747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.744755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.745302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.745314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.745328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.745849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.745862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:51.308 [2024-09-27 15:42:31.745876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:51.308 [2024-09-27 15:42:31.745884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:51.308 passed 00:23:51.569 Test: blockdev nvme passthru rw ...passed 00:23:51.569 Test: blockdev nvme passthru vendor specific ...[2024-09-27 15:42:31.829650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.569 [2024-09-27 15:42:31.829688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:51.569 [2024-09-27 15:42:31.830078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.569 [2024-09-27 15:42:31.830091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:51.569 [2024-09-27 15:42:31.830461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.569 [2024-09-27 15:42:31.830471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:51.569 [2024-09-27 15:42:31.830818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.569 [2024-09-27 15:42:31.830829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:51.569 passed 00:23:51.569 Test: blockdev nvme admin passthru ...passed 00:23:51.569 Test: blockdev copy ...passed 00:23:51.569 00:23:51.569 Run Summary: Type Total Ran Passed Failed Inactive 00:23:51.569 suites 1 1 n/a 0 0 00:23:51.569 tests 23 23 23 0 0 00:23:51.569 asserts 152 152 152 0 n/a 00:23:51.569 00:23:51.569 Elapsed time = 1.647 seconds 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.830 rmmod nvme_tcp 00:23:51.830 rmmod nvme_fabrics 00:23:51.830 rmmod nvme_keyring 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 404532 ']' 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 404532 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 404532 ']' 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 404532 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.830 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404532 00:23:52.091 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:52.091 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:52.091 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404532' 00:23:52.091 killing process with pid 404532 00:23:52.091 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 404532 00:23:52.091 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 404532 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.353 15:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.265 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.265 00:23:54.265 real 0m13.086s 00:23:54.265 user 0m16.267s 00:23:54.265 sys 0m6.893s 00:23:54.265 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.265 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.265 ************************************ 00:23:54.265 END TEST nvmf_bdevio_no_huge 00:23:54.265 ************************************ 00:23:54.526 15:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:54.526 15:42:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:54.526 15:42:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.526 15:42:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:54.526 ************************************ 00:23:54.526 START TEST nvmf_tls 00:23:54.526 ************************************ 00:23:54.526 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:54.526 * Looking for test storage... 00:23:54.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.527 15:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.527 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.789 --rc genhtml_branch_coverage=1 00:23:54.789 --rc genhtml_function_coverage=1 00:23:54.789 --rc genhtml_legend=1 00:23:54.789 --rc geninfo_all_blocks=1 00:23:54.789 --rc geninfo_unexecuted_blocks=1 00:23:54.789 00:23:54.789 ' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.789 --rc genhtml_branch_coverage=1 00:23:54.789 --rc genhtml_function_coverage=1 00:23:54.789 --rc genhtml_legend=1 00:23:54.789 --rc geninfo_all_blocks=1 00:23:54.789 --rc geninfo_unexecuted_blocks=1 00:23:54.789 00:23:54.789 ' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.789 --rc genhtml_branch_coverage=1 00:23:54.789 --rc genhtml_function_coverage=1 00:23:54.789 --rc genhtml_legend=1 00:23:54.789 --rc geninfo_all_blocks=1 00:23:54.789 --rc geninfo_unexecuted_blocks=1 00:23:54.789 00:23:54.789 ' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:54.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.789 --rc genhtml_branch_coverage=1 00:23:54.789 --rc genhtml_function_coverage=1 00:23:54.789 --rc genhtml_legend=1 00:23:54.789 --rc geninfo_all_blocks=1 00:23:54.789 --rc geninfo_unexecuted_blocks=1 00:23:54.789 00:23:54.789 ' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
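(tls.sh now sources the same nvmf/common.sh seen in the bdevio run, so the defaults being traced here amount to the sketch below. Port numbers are as traced; deriving NVME_HOSTID by stripping the NQN prefix is an assumption about the helper, made because only the resulting values appear in this log:

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # the bare uuid, matching the traced value
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

nvme gen-hostnqn is typically stable per machine when a DMI system UUID is available, which is consistent with both test suites printing the identical 00539ede-... value.)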
00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.789 15:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
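(The e810/x722/mlx arrays above are vendor:device lookup tables; the loop that follows at scripts common.sh@364+ resolves each matching PCI function to its kernel net device through sysfs. A simplified sketch of that lookup, assuming the same E810 0x159b device ID found earlier — not the full gather_supported_nvmf_pci_devs:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        # keep only Intel E810 functions (vendor/device files hold "0x8086"/"0x159b")
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do            # the bound netdev name lives under .../net/
            [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
    done

On this rig that yields the same two "Found 0000:31:00.x: cvl_0_y" lines echoed below.)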
00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:02.934 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:02.934 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:02.934 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:02.935 Found net devices under 0000:31:00.0: cvl_0_0 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:02.935 Found net devices under 0000:31:00.1: cvl_0_1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
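For reference, the nvmf_tcp_init sequence traced below reduces to a small loopback topology built from the two E810 ports found above: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A minimal sketch of the equivalent manual setup, using the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # isolate the target port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check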
00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:24:02.935 00:24:02.935 --- 10.0.0.2 ping statistics --- 00:24:02.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.935 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:24:02.935 00:24:02.935 --- 10.0.0.1 ping statistics --- 00:24:02.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.935 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=409298 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 409298 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 409298 ']' 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.935 15:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.935 [2024-09-27 15:42:42.810943] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
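The nvmfappstart step above comes down to launching the target binary inside that namespace and then blocking until its RPC socket answers. A sketch under the same names; the readiness loop here is an illustrative stand-in for the suite's real waitforlisten() helper in autotest_common.sh, which adds retry limits and timeout handling:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# hypothetical poll; rpc_get_methods is served even in the --wait-for-rpc pre-init state
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done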
00:24:02.935 [2024-09-27 15:42:42.811014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.935 [2024-09-27 15:42:42.902127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.935 [2024-09-27 15:42:42.948545] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.935 [2024-09-27 15:42:42.948595] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.935 [2024-09-27 15:42:42.948604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.935 [2024-09-27 15:42:42.948612] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.935 [2024-09-27 15:42:42.948619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.935 [2024-09-27 15:42:42.948642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:03.196 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:03.458 true 00:24:03.458 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:03.458 15:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:03.720 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:03.720 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:03.720 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:03.981 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:03.981 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:03.981 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:03.981 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:03.981 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:04.242 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.242 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:04.504 15:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:04.766 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.766 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:05.027 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:05.027 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:05.027 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:05.027 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.027 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:24:05.288 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NagWxdqRwU 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.79XxuXZBEk 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NagWxdqRwU 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.79XxuXZBEk 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:05.553 15:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:05.814 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NagWxdqRwU 00:24:05.814 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NagWxdqRwU 00:24:05.814 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.075 [2024-09-27 15:42:46.360922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.075 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:06.075 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:06.336 [2024-09-27 15:42:46.677689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:06.336 [2024-09-27 15:42:46.677884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.336 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:06.597 malloc0 00:24:06.597 15:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:06.597 15:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NagWxdqRwU 00:24:06.858 15:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.119 15:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NagWxdqRwU 00:24:17.121 Initializing NVMe Controllers 00:24:17.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.121 Initialization complete. Launching workers. 00:24:17.121 ======================================================== 00:24:17.121 Latency(us) 00:24:17.121 Device Information : IOPS MiB/s Average min max 00:24:17.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18512.29 72.31 3457.39 1116.13 4180.52 00:24:17.121 ======================================================== 00:24:17.121 Total : 18512.29 72.31 3457.39 1116.13 4180.52 00:24:17.121 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NagWxdqRwU 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NagWxdqRwU 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=412238 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 412238 /var/tmp/bdevperf.sock 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 412238 ']' 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.121 15:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.121 [2024-09-27 15:42:57.545259] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:17.121 [2024-09-27 15:42:57.545318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412238 ] 00:24:17.382 [2024-09-27 15:42:57.624924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.382 [2024-09-27 15:42:57.655667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.952 15:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.952 15:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:17.952 15:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NagWxdqRwU 00:24:18.212 15:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.212 [2024-09-27 15:42:58.680806] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.473 TLSTESTn1 00:24:18.473 15:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:18.473 Running I/O for 10 seconds... 
00:24:28.537 5994.00 IOPS, 23.41 MiB/s 5221.50 IOPS, 20.40 MiB/s 4379.67 IOPS, 17.11 MiB/s 4364.75 IOPS, 17.05 MiB/s 4561.80 IOPS, 17.82 MiB/s 4572.17 IOPS, 17.86 MiB/s 4530.86 IOPS, 17.70 MiB/s 4517.62 IOPS, 17.65 MiB/s 4668.22 IOPS, 18.24 MiB/s 4506.90 IOPS, 17.61 MiB/s 00:24:28.537 Latency(us) 00:24:28.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.537 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.537 Verification LBA range: start 0x0 length 0x2000 00:24:28.537 TLSTESTn1 : 10.09 4481.49 17.51 0.00 0.00 28447.75 6007.47 81264.64 00:24:28.537 =================================================================================================================== 00:24:28.538 Total : 4481.49 17.51 0.00 0.00 28447.75 6007.47 81264.64 00:24:28.538 { 00:24:28.538 "results": [ 00:24:28.538 { 00:24:28.538 "job": "TLSTESTn1", 00:24:28.538 "core_mask": "0x4", 00:24:28.538 "workload": "verify", 00:24:28.538 "status": "finished", 00:24:28.538 "verify_range": { 00:24:28.538 "start": 0, 00:24:28.538 "length": 8192 00:24:28.538 }, 00:24:28.538 "queue_depth": 128, 00:24:28.538 "io_size": 4096, 00:24:28.538 "runtime": 10.085034, 00:24:28.538 "iops": 4481.492080244846, 00:24:28.538 "mibps": 17.50582843845643, 00:24:28.538 "io_failed": 0, 00:24:28.538 "io_timeout": 0, 00:24:28.538 "avg_latency_us": 28447.74922618521, 00:24:28.538 "min_latency_us": 6007.466666666666, 00:24:28.538 "max_latency_us": 81264.64 00:24:28.538 } 00:24:28.538 ], 00:24:28.538 "core_count": 1 00:24:28.538 } 00:24:28.538 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.538 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 412238 00:24:28.538 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 412238 ']' 00:24:28.538 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 412238 00:24:28.538 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412238 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412238' 00:24:28.829 killing process with pid 412238 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 412238 00:24:28.829 Received shutdown signal, test time was about 10.000000 seconds 00:24:28.829 00:24:28.829 Latency(us) 00:24:28.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.829 =================================================================================================================== 00:24:28.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 412238 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.79XxuXZBEk 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.79XxuXZBEk 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.79XxuXZBEk 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.79XxuXZBEk 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=414396 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 414396 /var/tmp/bdevperf.sock 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 414396 ']' 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.829 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.829 [2024-09-27 15:43:09.239135] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:28.829 [2024-09-27 15:43:09.239193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414396 ] 00:24:29.108 [2024-09-27 15:43:09.317061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.108 [2024-09-27 15:43:09.344989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.108 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.108 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:29.108 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.79XxuXZBEk 00:24:29.109 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.392 [2024-09-27 15:43:09.736402] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.392 [2024-09-27 15:43:09.746043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.392 [2024-09-27 15:43:09.746544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c7a0 (107): Transport endpoint is not connected 00:24:29.392 [2024-09-27 15:43:09.747540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c7a0 (9): Bad file descriptor 00:24:29.392 [2024-09-27 15:43:09.748541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.392 [2024-09-27 15:43:09.748548] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:29.392 [2024-09-27 15:43:09.748554] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:29.392 [2024-09-27 15:43:09.748562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
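This failure is the expected outcome of the target/tls.sh@147 case: the bdevperf side loaded the second key (/tmp/tmp.79XxuXZBEk) while the subsystem host entry for nqn.2016-06.io.spdk:host1 was registered with the first key, so the TLS handshake cannot complete, the target drops the connection (the errno 107 above), and bdev_nvme_attach_controller surfaces the Input/output error recorded in the JSON-RPC exchange below. Both keys came from format_interchange_psk earlier in the run; a sketch of that derivation, assuming the helper follows the TP-8018 interchange layout (base64 of the configured PSK with its CRC-32 appended little-endian, behind the NVMeTLSkey-1 prefix and a two-digit hash indicator, 01 for SHA-256 as in the key strings above):

format_interchange_psk() {
    # sketch only; the real helper lives in test/nvmf/common.sh
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian (assumed)
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# should reproduce the NVMeTLSkey-1:01:MDAx... key generated at target/tls.sh@119 above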
00:24:29.392 request: 00:24:29.392 { 00:24:29.392 "name": "TLSTEST", 00:24:29.392 "trtype": "tcp", 00:24:29.392 "traddr": "10.0.0.2", 00:24:29.392 "adrfam": "ipv4", 00:24:29.392 "trsvcid": "4420", 00:24:29.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.392 "prchk_reftag": false, 00:24:29.392 "prchk_guard": false, 00:24:29.392 "hdgst": false, 00:24:29.392 "ddgst": false, 00:24:29.392 "psk": "key0", 00:24:29.392 "allow_unrecognized_csi": false, 00:24:29.392 "method": "bdev_nvme_attach_controller", 00:24:29.392 "req_id": 1 00:24:29.392 } 00:24:29.392 Got JSON-RPC error response 00:24:29.392 response: 00:24:29.392 { 00:24:29.392 "code": -5, 00:24:29.392 "message": "Input/output error" 00:24:29.392 } 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 414396 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 414396 ']' 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 414396 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 414396 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 414396' 00:24:29.392 killing process with pid 414396 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 414396 00:24:29.392 Received shutdown signal, test time was about 10.000000 seconds 00:24:29.392 00:24:29.392 Latency(us) 00:24:29.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.392 =================================================================================================================== 00:24:29.392 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.392 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 414396 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NagWxdqRwU 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NagWxdqRwU 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NagWxdqRwU 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NagWxdqRwU 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=414726 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 414726 /var/tmp/bdevperf.sock 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 414726 ']' 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.670 15:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.670 [2024-09-27 15:43:09.988241] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:29.670 [2024-09-27 15:43:09.988294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414726 ] 00:24:29.670 [2024-09-27 15:43:10.068633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.670 [2024-09-27 15:43:10.096357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.324 15:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.324 15:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:30.324 15:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NagWxdqRwU 00:24:30.610 15:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:30.916 [2024-09-27 15:43:11.114584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.916 [2024-09-27 15:43:11.119293] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:30.916 [2024-09-27 15:43:11.119312] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:30.916 [2024-09-27 15:43:11.119334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:30.916 [2024-09-27 15:43:11.119730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffb7a0 (107): Transport endpoint is not connected 00:24:30.916 [2024-09-27 15:43:11.120725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffb7a0 (9): Bad file descriptor 00:24:30.916 [2024-09-27 15:43:11.121726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.917 [2024-09-27 15:43:11.121732] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:30.917 [2024-09-27 15:43:11.121738] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:30.917 [2024-09-27 15:43:11.121746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:30.917 request: 00:24:30.917 { 00:24:30.917 "name": "TLSTEST", 00:24:30.917 "trtype": "tcp", 00:24:30.917 "traddr": "10.0.0.2", 00:24:30.917 "adrfam": "ipv4", 00:24:30.917 "trsvcid": "4420", 00:24:30.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.917 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:30.917 "prchk_reftag": false, 00:24:30.917 "prchk_guard": false, 00:24:30.917 "hdgst": false, 00:24:30.917 "ddgst": false, 00:24:30.917 "psk": "key0", 00:24:30.917 "allow_unrecognized_csi": false, 00:24:30.917 "method": "bdev_nvme_attach_controller", 00:24:30.917 "req_id": 1 00:24:30.917 } 00:24:30.917 Got JSON-RPC error response 00:24:30.917 response: 00:24:30.917 { 00:24:30.917 "code": -5, 00:24:30.917 "message": "Input/output error" 00:24:30.917 } 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 414726 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 414726 ']' 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 414726 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 414726 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 414726' 00:24:30.917 killing process with pid 414726 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 414726 00:24:30.917 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.917 00:24:30.917 Latency(us) 00:24:30.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.917 =================================================================================================================== 00:24:30.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 414726 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NagWxdqRwU 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NagWxdqRwU 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NagWxdqRwU 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NagWxdqRwU 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=414922 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 414922 /var/tmp/bdevperf.sock 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 414922 ']' 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.917 15:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.917 [2024-09-27 15:43:11.382209] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:30.917 [2024-09-27 15:43:11.382265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414922 ] 00:24:31.234 [2024-09-27 15:43:11.461507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.234 [2024-09-27 15:43:11.488745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.856 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.856 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:31.856 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NagWxdqRwU 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.121 [2024-09-27 15:43:12.485983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.121 [2024-09-27 15:43:12.492599] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:32.121 [2024-09-27 15:43:12.492616] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:32.121 [2024-09-27 15:43:12.492638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.121 [2024-09-27 15:43:12.493241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f57a0 (107): Transport endpoint is not connected 00:24:32.121 [2024-09-27 15:43:12.494236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f57a0 (9): Bad file descriptor 00:24:32.121 [2024-09-27 15:43:12.495238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:32.121 [2024-09-27 15:43:12.495245] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:32.121 [2024-09-27 15:43:12.495251] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:32.121 [2024-09-27 15:43:12.495258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:32.121 request: 00:24:32.121 { 00:24:32.121 "name": "TLSTEST", 00:24:32.121 "trtype": "tcp", 00:24:32.121 "traddr": "10.0.0.2", 00:24:32.121 "adrfam": "ipv4", 00:24:32.121 "trsvcid": "4420", 00:24:32.121 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.121 "prchk_reftag": false, 00:24:32.121 "prchk_guard": false, 00:24:32.121 "hdgst": false, 00:24:32.121 "ddgst": false, 00:24:32.121 "psk": "key0", 00:24:32.121 "allow_unrecognized_csi": false, 00:24:32.121 "method": "bdev_nvme_attach_controller", 00:24:32.121 "req_id": 1 00:24:32.121 } 00:24:32.121 Got JSON-RPC error response 00:24:32.121 response: 00:24:32.121 { 00:24:32.121 "code": -5, 00:24:32.121 "message": "Input/output error" 00:24:32.121 } 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 414922 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 414922 ']' 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 414922 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 414922 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 414922' 00:24:32.121 killing process with pid 414922 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 414922 00:24:32.121 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.121 00:24:32.121 Latency(us) 00:24:32.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.121 =================================================================================================================== 00:24:32.121 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.121 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 414922 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=415121 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 415121 /var/tmp/bdevperf.sock 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 415121 ']' 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.383 15:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.383 [2024-09-27 15:43:12.754478] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
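The NOT wrapper traced above (local es=0, valid_exec_arg, the final (( !es == 0 )) check) is autotest's assert-failure helper: it runs the wrapped command and inverts its exit status, so this test case only passes if run_bdevperf with an empty PSK path fails. A rough Python analogue, under the assumption that NOT does nothing beyond inverting success and failure:

import subprocess

# Sketch of the NOT helper's contract: succeed only when the command fails.
def NOT(*cmd: str) -> bool:
    return subprocess.run(cmd).returncode != 0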
00:24:32.383 [2024-09-27 15:43:12.754537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415121 ] 00:24:32.383 [2024-09-27 15:43:12.833820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.383 [2024-09-27 15:43:12.861042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.326 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.326 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.326 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:33.326 [2024-09-27 15:43:13.697749] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:33.326 [2024-09-27 15:43:13.697778] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:33.326 request: 00:24:33.326 { 00:24:33.326 "name": "key0", 00:24:33.326 "path": "", 00:24:33.326 "method": "keyring_file_add_key", 00:24:33.326 "req_id": 1 00:24:33.326 } 00:24:33.326 Got JSON-RPC error response 00:24:33.326 response: 00:24:33.326 { 00:24:33.326 "code": -1, 00:24:33.326 "message": "Operation not permitted" 00:24:33.326 } 00:24:33.326 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:33.587 [2024-09-27 15:43:13.870252] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.587 [2024-09-27 15:43:13.870274] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:33.587 request: 00:24:33.587 { 00:24:33.587 "name": "TLSTEST", 00:24:33.587 "trtype": "tcp", 00:24:33.587 "traddr": "10.0.0.2", 00:24:33.587 "adrfam": "ipv4", 00:24:33.587 "trsvcid": "4420", 00:24:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.587 "prchk_reftag": false, 00:24:33.587 "prchk_guard": false, 00:24:33.587 "hdgst": false, 00:24:33.587 "ddgst": false, 00:24:33.587 "psk": "key0", 00:24:33.587 "allow_unrecognized_csi": false, 00:24:33.587 "method": "bdev_nvme_attach_controller", 00:24:33.587 "req_id": 1 00:24:33.587 } 00:24:33.587 Got JSON-RPC error response 00:24:33.587 response: 00:24:33.587 { 00:24:33.587 "code": -126, 00:24:33.587 "message": "Required key not available" 00:24:33.587 } 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 415121 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 415121 ']' 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 415121 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415121 
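A decoding note for the JSON-RPC failures above: in these logs the "code" field carries a negated Linux errno, so the empty path is rejected with -1 (EPERM, rendered as "Operation not permitted") and the unloadable key surfaces as -126 (ENOKEY, "Required key not available"), while the earlier handshake failure reported -5 (EIO). A standard-library one-off to translate them (Linux-specific values; ENOKEY is not defined on every platform):

import errno
import os

# Strip the JSON-RPC sign convention and let libc render the message.
for code in (-1, -5, -126):
    print(code, errno.errorcode[-code], os.strerror(-code))
# -1 EPERM Operation not permitted
# -5 EIO Input/output error
# -126 ENOKEY Required key not available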
00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415121' 00:24:33.587 killing process with pid 415121 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 415121 00:24:33.587 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.587 00:24:33.587 Latency(us) 00:24:33.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.587 =================================================================================================================== 00:24:33.587 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.587 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 415121 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 409298 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 409298 ']' 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 409298 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.587 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 409298 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 409298' 00:24:33.848 killing process with pid 409298 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 409298 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 409298 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 
00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.UqGGInoeRK 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.UqGGInoeRK 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=415468 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 415468 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 415468 ']' 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.848 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.108 [2024-09-27 15:43:14.370147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:34.108 [2024-09-27 15:43:14.370203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.108 [2024-09-27 15:43:14.454992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.108 [2024-09-27 15:43:14.494363] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.108 [2024-09-27 15:43:14.494417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
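The format_interchange_psk step above is what produced key_long: the literal key characters plus a 4-byte CRC32 are base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<base64>:, with the 02 field selecting the 48-byte (SHA-384-sized) flavor of the interchange format. A self-contained sketch of what the inline python heredoc computes, mirroring the prefix, key, and digest shell variables above (the little-endian CRC byte order is an assumption, chosen to match the logged output):

import base64
import zlib

prefix = "NVMeTLSkey-1"
key = b"00112233445566778899aabbccddeeff0011223344556677"
digest = 2  # rendered as "02" in the interchange string

# CRC32 over the literal key characters, appended before base64 encoding.
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
b64 = base64.b64encode(key + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:")
# Should reproduce the key_long value captured above.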
00:24:34.108 [2024-09-27 15:43:14.494424] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.108 [2024-09-27 15:43:14.494429] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.108 [2024-09-27 15:43:14.494434] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.108 [2024-09-27 15:43:14.494455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.679 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.679 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.679 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:34.679 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.679 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.939 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.940 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:24:34.940 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UqGGInoeRK 00:24:34.940 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.940 [2024-09-27 15:43:15.357808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.940 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.200 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.461 [2024-09-27 15:43:15.710674] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.461 [2024-09-27 15:43:15.710859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.461 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.461 malloc0 00:24:35.461 15:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:35.721 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqGGInoeRK 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UqGGInoeRK 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=415936 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 415936 /var/tmp/bdevperf.sock 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 415936 ']' 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.981 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.241 [2024-09-27 15:43:16.495582] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
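For orientation, the bdevperf instance being started here drives the TLS-wrapped namespace with queue depth 128 (-q), 4096-byte I/Os (-o), the verify workload (-w), and a 10 second run (-t), kicked off through perform_tests on the bdevperf RPC socket. The MiB/s column in the summary that follows is simply IOPS times I/O size; a one-liner to cross-check the figures reported below (constants copied from the JSON results block):

# Sanity-check bdevperf's throughput column against its IOPS column.
iops = 4126.436994002568  # "iops" from the results below
io_size = 4096            # bytes, from -o 4096
print(iops * io_size / 2**20)  # ~16.1189 MiB/s, matching "mibps"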
00:24:36.241 [2024-09-27 15:43:16.495624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415936 ] 00:24:36.241 [2024-09-27 15:43:16.538613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.241 [2024-09-27 15:43:16.566563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.241 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.241 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:36.241 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:36.500 15:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.500 [2024-09-27 15:43:16.982253] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.759 TLSTESTn1 00:24:36.759 15:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.759 Running I/O for 10 seconds... 00:24:47.050 3133.00 IOPS, 12.24 MiB/s 4036.00 IOPS, 15.77 MiB/s 4425.00 IOPS, 17.29 MiB/s 4214.00 IOPS, 16.46 MiB/s 4227.20 IOPS, 16.51 MiB/s 4276.67 IOPS, 16.71 MiB/s 4512.43 IOPS, 17.63 MiB/s 4374.00 IOPS, 17.09 MiB/s 4268.22 IOPS, 16.67 MiB/s 4118.60 IOPS, 16.09 MiB/s 00:24:47.050 Latency(us) 00:24:47.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.050 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:47.050 Verification LBA range: start 0x0 length 0x2000 00:24:47.050 TLSTESTn1 : 10.01 4126.44 16.12 0.00 0.00 30983.96 4478.29 122333.87 00:24:47.050 =================================================================================================================== 00:24:47.050 Total : 4126.44 16.12 0.00 0.00 30983.96 4478.29 122333.87 00:24:47.050 { 00:24:47.050 "results": [ 00:24:47.050 { 00:24:47.050 "job": "TLSTESTn1", 00:24:47.050 "core_mask": "0x4", 00:24:47.050 "workload": "verify", 00:24:47.050 "status": "finished", 00:24:47.050 "verify_range": { 00:24:47.050 "start": 0, 00:24:47.050 "length": 8192 00:24:47.050 }, 00:24:47.050 "queue_depth": 128, 00:24:47.050 "io_size": 4096, 00:24:47.050 "runtime": 10.011785, 00:24:47.050 "iops": 4126.436994002568, 00:24:47.050 "mibps": 16.11889450782253, 00:24:47.050 "io_failed": 0, 00:24:47.050 "io_timeout": 0, 00:24:47.050 "avg_latency_us": 30983.958495711595, 00:24:47.050 "min_latency_us": 4478.293333333333, 00:24:47.050 "max_latency_us": 122333.86666666667 00:24:47.050 } 00:24:47.050 ], 00:24:47.050 "core_count": 1 00:24:47.050 } 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 415936 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 415936 ']' 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 415936 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415936 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415936' 00:24:47.050 killing process with pid 415936 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 415936 00:24:47.050 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.050 00:24:47.050 Latency(us) 00:24:47.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.050 =================================================================================================================== 00:24:47.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 415936 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.UqGGInoeRK 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqGGInoeRK 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqGGInoeRK 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:47.050 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqGGInoeRK 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UqGGInoeRK 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=418165 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 418165 /var/tmp/bdevperf.sock 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 418165 ']' 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.051 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.051 [2024-09-27 15:43:27.477107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:47.051 [2024-09-27 15:43:27.477171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418165 ] 00:24:47.311 [2024-09-27 15:43:27.554787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.311 [2024-09-27 15:43:27.582805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.882 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.882 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:47.882 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:48.143 [2024-09-27 15:43:28.423105] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UqGGInoeRK': 0100666 00:24:48.143 [2024-09-27 15:43:28.423127] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:48.143 request: 00:24:48.143 { 00:24:48.143 "name": "key0", 00:24:48.143 "path": "/tmp/tmp.UqGGInoeRK", 00:24:48.143 "method": "keyring_file_add_key", 00:24:48.143 "req_id": 1 00:24:48.143 } 00:24:48.143 Got JSON-RPC error response 00:24:48.143 response: 00:24:48.143 { 00:24:48.143 "code": -1, 00:24:48.143 "message": "Operation not permitted" 00:24:48.143 } 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.143 [2024-09-27 15:43:28.583573] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.143 [2024-09-27 15:43:28.583593] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 
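The keyring error above is the point of the chmod 0666 step: the logged mode 0100666 is the file's full st_mode (S_IFREG | 0666), and a PSK file that group or others can access is refused before it is ever read. Presumably keyring_file_check_path enforces owner-only permission bits; a sketch of an equivalent test follows (illustrative only, not SPDK's actual code):

import os
import stat

def key_file_mode_ok(path: str) -> bool:
    # Reject any group/other permission bits, so 0600 passes and 0666 fails,
    # matching the "Invalid permissions ... 0100666" error above.
    return stat.S_IMODE(os.stat(path).st_mode) & 0o077 == 0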
00:24:48.143 request: 00:24:48.143 { 00:24:48.143 "name": "TLSTEST", 00:24:48.143 "trtype": "tcp", 00:24:48.143 "traddr": "10.0.0.2", 00:24:48.143 "adrfam": "ipv4", 00:24:48.143 "trsvcid": "4420", 00:24:48.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.143 "prchk_reftag": false, 00:24:48.143 "prchk_guard": false, 00:24:48.143 "hdgst": false, 00:24:48.143 "ddgst": false, 00:24:48.143 "psk": "key0", 00:24:48.143 "allow_unrecognized_csi": false, 00:24:48.143 "method": "bdev_nvme_attach_controller", 00:24:48.143 "req_id": 1 00:24:48.143 } 00:24:48.143 Got JSON-RPC error response 00:24:48.143 response: 00:24:48.143 { 00:24:48.143 "code": -126, 00:24:48.143 "message": "Required key not available" 00:24:48.143 } 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 418165 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 418165 ']' 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 418165 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.143 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418165 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418165' 00:24:48.404 killing process with pid 418165 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 418165 00:24:48.404 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.404 00:24:48.404 Latency(us) 00:24:48.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.404 =================================================================================================================== 00:24:48.404 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 418165 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 415468 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 415468 ']' 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 415468 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415468 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415468' 00:24:48.404 killing process with pid 415468 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 415468 00:24:48.404 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 415468 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=418421 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 418421 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 418421 ']' 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.665 15:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.665 [2024-09-27 15:43:29.033513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:48.665 [2024-09-27 15:43:29.033572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.665 [2024-09-27 15:43:29.117992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.665 [2024-09-27 15:43:29.146795] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.665 [2024-09-27 15:43:29.146830] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.665 [2024-09-27 15:43:29.146835] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.665 [2024-09-27 15:43:29.146840] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:48.665 [2024-09-27 15:43:29.146845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.665 [2024-09-27 15:43:29.146859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UqGGInoeRK 00:24:49.606 15:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:49.606 [2024-09-27 15:43:30.017097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.606 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:49.867 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:50.128 [2024-09-27 15:43:30.369960] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.128 [2024-09-27 15:43:30.370168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.128 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:50.128 malloc0 00:24:50.128 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:50.388 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:50.388 [2024-09-27 15:43:30.875124] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UqGGInoeRK': 0100666 00:24:50.388 [2024-09-27 15:43:30.875148] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:50.649 request: 00:24:50.649 { 00:24:50.649 "name": "key0", 00:24:50.649 "path": "/tmp/tmp.UqGGInoeRK", 00:24:50.649 "method": "keyring_file_add_key", 00:24:50.649 "req_id": 1 00:24:50.649 } 00:24:50.649 Got JSON-RPC error response 00:24:50.649 response: 00:24:50.649 { 00:24:50.649 "code": -1, 00:24:50.649 "message": "Operation not permitted" 00:24:50.649 } 00:24:50.649 15:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.649 [2024-09-27 15:43:31.039559] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:50.649 [2024-09-27 15:43:31.039587] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:50.649 request: 00:24:50.649 { 00:24:50.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.649 "host": "nqn.2016-06.io.spdk:host1", 00:24:50.649 "psk": "key0", 00:24:50.649 "method": "nvmf_subsystem_add_host", 00:24:50.649 "req_id": 1 00:24:50.649 } 00:24:50.649 Got JSON-RPC error response 00:24:50.649 response: 00:24:50.649 { 00:24:50.649 "code": -32603, 00:24:50.649 "message": "Internal error" 00:24:50.649 } 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 418421 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 418421 ']' 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 418421 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418421 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418421' 00:24:50.649 killing process with pid 418421 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 418421 00:24:50.649 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 418421 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.UqGGInoeRK 00:24:50.910 15:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=418887 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 418887 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 418887 ']' 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.910 15:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.911 [2024-09-27 15:43:31.299967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:50.911 [2024-09-27 15:43:31.300029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.911 [2024-09-27 15:43:31.384854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.171 [2024-09-27 15:43:31.413153] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.171 [2024-09-27 15:43:31.413187] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.171 [2024-09-27 15:43:31.413193] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.171 [2024-09-27 15:43:31.413197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.171 [2024-09-27 15:43:31.413201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.171 [2024-09-27 15:43:31.413218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UqGGInoeRK 00:24:51.742 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:52.002 [2024-09-27 15:43:32.279066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.002 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:52.002 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:52.262 [2024-09-27 15:43:32.595845] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:52.262 [2024-09-27 15:43:32.596057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.262 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:52.522 malloc0 00:24:52.522 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:52.522 15:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:52.782 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=419254 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 419254 /var/tmp/bdevperf.sock 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 419254 ']' 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.042 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.042 [2024-09-27 15:43:33.363162] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:53.042 [2024-09-27 15:43:33.363237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419254 ] 00:24:53.042 [2024-09-27 15:43:33.442302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.042 [2024-09-27 15:43:33.470312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.983 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.983 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:53.983 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:24:53.983 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:54.244 [2024-09-27 15:43:34.479380] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.244 TLSTESTn1 00:24:54.244 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:54.505 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:54.505 "subsystems": [ 00:24:54.505 { 00:24:54.505 "subsystem": "keyring", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "keyring_file_add_key", 00:24:54.505 "params": { 00:24:54.505 "name": "key0", 00:24:54.505 "path": "/tmp/tmp.UqGGInoeRK" 00:24:54.505 } 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "iobuf", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "iobuf_set_options", 00:24:54.505 "params": { 00:24:54.505 "small_pool_count": 8192, 00:24:54.505 "large_pool_count": 1024, 00:24:54.505 "small_bufsize": 8192, 00:24:54.505 "large_bufsize": 135168 00:24:54.505 } 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "sock", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "sock_set_default_impl", 00:24:54.505 "params": { 00:24:54.505 "impl_name": "posix" 00:24:54.505 } 00:24:54.505 }, 
00:24:54.505 { 00:24:54.505 "method": "sock_impl_set_options", 00:24:54.505 "params": { 00:24:54.505 "impl_name": "ssl", 00:24:54.505 "recv_buf_size": 4096, 00:24:54.505 "send_buf_size": 4096, 00:24:54.505 "enable_recv_pipe": true, 00:24:54.505 "enable_quickack": false, 00:24:54.505 "enable_placement_id": 0, 00:24:54.505 "enable_zerocopy_send_server": true, 00:24:54.505 "enable_zerocopy_send_client": false, 00:24:54.505 "zerocopy_threshold": 0, 00:24:54.505 "tls_version": 0, 00:24:54.505 "enable_ktls": false 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "sock_impl_set_options", 00:24:54.505 "params": { 00:24:54.505 "impl_name": "posix", 00:24:54.505 "recv_buf_size": 2097152, 00:24:54.505 "send_buf_size": 2097152, 00:24:54.505 "enable_recv_pipe": true, 00:24:54.505 "enable_quickack": false, 00:24:54.505 "enable_placement_id": 0, 00:24:54.505 "enable_zerocopy_send_server": true, 00:24:54.505 "enable_zerocopy_send_client": false, 00:24:54.505 "zerocopy_threshold": 0, 00:24:54.505 "tls_version": 0, 00:24:54.505 "enable_ktls": false 00:24:54.505 } 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "vmd", 00:24:54.505 "config": [] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "accel", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "accel_set_options", 00:24:54.505 "params": { 00:24:54.505 "small_cache_size": 128, 00:24:54.505 "large_cache_size": 16, 00:24:54.505 "task_count": 2048, 00:24:54.505 "sequence_count": 2048, 00:24:54.505 "buf_count": 2048 00:24:54.505 } 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "bdev", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "bdev_set_options", 00:24:54.505 "params": { 00:24:54.505 "bdev_io_pool_size": 65535, 00:24:54.505 "bdev_io_cache_size": 256, 00:24:54.505 "bdev_auto_examine": true, 00:24:54.505 "iobuf_small_cache_size": 128, 00:24:54.505 "iobuf_large_cache_size": 16 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_raid_set_options", 00:24:54.505 "params": { 00:24:54.505 "process_window_size_kb": 1024, 00:24:54.505 "process_max_bandwidth_mb_sec": 0 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_iscsi_set_options", 00:24:54.505 "params": { 00:24:54.505 "timeout_sec": 30 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_nvme_set_options", 00:24:54.505 "params": { 00:24:54.505 "action_on_timeout": "none", 00:24:54.505 "timeout_us": 0, 00:24:54.505 "timeout_admin_us": 0, 00:24:54.505 "keep_alive_timeout_ms": 10000, 00:24:54.505 "arbitration_burst": 0, 00:24:54.505 "low_priority_weight": 0, 00:24:54.505 "medium_priority_weight": 0, 00:24:54.505 "high_priority_weight": 0, 00:24:54.505 "nvme_adminq_poll_period_us": 10000, 00:24:54.505 "nvme_ioq_poll_period_us": 0, 00:24:54.505 "io_queue_requests": 0, 00:24:54.505 "delay_cmd_submit": true, 00:24:54.505 "transport_retry_count": 4, 00:24:54.505 "bdev_retry_count": 3, 00:24:54.505 "transport_ack_timeout": 0, 00:24:54.505 "ctrlr_loss_timeout_sec": 0, 00:24:54.505 "reconnect_delay_sec": 0, 00:24:54.505 "fast_io_fail_timeout_sec": 0, 00:24:54.505 "disable_auto_failback": false, 00:24:54.505 "generate_uuids": false, 00:24:54.505 "transport_tos": 0, 00:24:54.505 "nvme_error_stat": false, 00:24:54.505 "rdma_srq_size": 0, 00:24:54.505 "io_path_stat": false, 00:24:54.505 "allow_accel_sequence": false, 00:24:54.505 "rdma_max_cq_size": 0, 00:24:54.505 "rdma_cm_event_timeout_ms": 0, 00:24:54.505 
"dhchap_digests": [ 00:24:54.505 "sha256", 00:24:54.505 "sha384", 00:24:54.505 "sha512" 00:24:54.505 ], 00:24:54.505 "dhchap_dhgroups": [ 00:24:54.505 "null", 00:24:54.505 "ffdhe2048", 00:24:54.505 "ffdhe3072", 00:24:54.505 "ffdhe4096", 00:24:54.505 "ffdhe6144", 00:24:54.505 "ffdhe8192" 00:24:54.505 ] 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_nvme_set_hotplug", 00:24:54.505 "params": { 00:24:54.505 "period_us": 100000, 00:24:54.505 "enable": false 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_malloc_create", 00:24:54.505 "params": { 00:24:54.505 "name": "malloc0", 00:24:54.505 "num_blocks": 8192, 00:24:54.505 "block_size": 4096, 00:24:54.505 "physical_block_size": 4096, 00:24:54.505 "uuid": "9b1a934a-8337-4b82-a699-7ea8b2f99965", 00:24:54.505 "optimal_io_boundary": 0, 00:24:54.505 "md_size": 0, 00:24:54.505 "dif_type": 0, 00:24:54.505 "dif_is_head_of_md": false, 00:24:54.505 "dif_pi_format": 0 00:24:54.505 } 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "method": "bdev_wait_for_examine" 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "nbd", 00:24:54.505 "config": [] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "scheduler", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "framework_set_scheduler", 00:24:54.505 "params": { 00:24:54.505 "name": "static" 00:24:54.505 } 00:24:54.505 } 00:24:54.505 ] 00:24:54.505 }, 00:24:54.505 { 00:24:54.505 "subsystem": "nvmf", 00:24:54.505 "config": [ 00:24:54.505 { 00:24:54.505 "method": "nvmf_set_config", 00:24:54.505 "params": { 00:24:54.505 "discovery_filter": "match_any", 00:24:54.505 "admin_cmd_passthru": { 00:24:54.506 "identify_ctrlr": false 00:24:54.506 }, 00:24:54.506 "dhchap_digests": [ 00:24:54.506 "sha256", 00:24:54.506 "sha384", 00:24:54.506 "sha512" 00:24:54.506 ], 00:24:54.506 "dhchap_dhgroups": [ 00:24:54.506 "null", 00:24:54.506 "ffdhe2048", 00:24:54.506 "ffdhe3072", 00:24:54.506 "ffdhe4096", 00:24:54.506 "ffdhe6144", 00:24:54.506 "ffdhe8192" 00:24:54.506 ] 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_set_max_subsystems", 00:24:54.506 "params": { 00:24:54.506 "max_subsystems": 1024 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_set_crdt", 00:24:54.506 "params": { 00:24:54.506 "crdt1": 0, 00:24:54.506 "crdt2": 0, 00:24:54.506 "crdt3": 0 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_create_transport", 00:24:54.506 "params": { 00:24:54.506 "trtype": "TCP", 00:24:54.506 "max_queue_depth": 128, 00:24:54.506 "max_io_qpairs_per_ctrlr": 127, 00:24:54.506 "in_capsule_data_size": 4096, 00:24:54.506 "max_io_size": 131072, 00:24:54.506 "io_unit_size": 131072, 00:24:54.506 "max_aq_depth": 128, 00:24:54.506 "num_shared_buffers": 511, 00:24:54.506 "buf_cache_size": 4294967295, 00:24:54.506 "dif_insert_or_strip": false, 00:24:54.506 "zcopy": false, 00:24:54.506 "c2h_success": false, 00:24:54.506 "sock_priority": 0, 00:24:54.506 "abort_timeout_sec": 1, 00:24:54.506 "ack_timeout": 0, 00:24:54.506 "data_wr_pool_size": 0 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_create_subsystem", 00:24:54.506 "params": { 00:24:54.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.506 "allow_any_host": false, 00:24:54.506 "serial_number": "SPDK00000000000001", 00:24:54.506 "model_number": "SPDK bdev Controller", 00:24:54.506 "max_namespaces": 10, 00:24:54.506 "min_cntlid": 1, 00:24:54.506 "max_cntlid": 65519, 00:24:54.506 
"ana_reporting": false 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_subsystem_add_host", 00:24:54.506 "params": { 00:24:54.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.506 "host": "nqn.2016-06.io.spdk:host1", 00:24:54.506 "psk": "key0" 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_subsystem_add_ns", 00:24:54.506 "params": { 00:24:54.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.506 "namespace": { 00:24:54.506 "nsid": 1, 00:24:54.506 "bdev_name": "malloc0", 00:24:54.506 "nguid": "9B1A934A83374B82A6997EA8B2F99965", 00:24:54.506 "uuid": "9b1a934a-8337-4b82-a699-7ea8b2f99965", 00:24:54.506 "no_auto_visible": false 00:24:54.506 } 00:24:54.506 } 00:24:54.506 }, 00:24:54.506 { 00:24:54.506 "method": "nvmf_subsystem_add_listener", 00:24:54.506 "params": { 00:24:54.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.506 "listen_address": { 00:24:54.506 "trtype": "TCP", 00:24:54.506 "adrfam": "IPv4", 00:24:54.506 "traddr": "10.0.0.2", 00:24:54.506 "trsvcid": "4420" 00:24:54.506 }, 00:24:54.506 "secure_channel": true 00:24:54.506 } 00:24:54.506 } 00:24:54.506 ] 00:24:54.506 } 00:24:54.506 ] 00:24:54.506 }' 00:24:54.506 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:54.767 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:54.767 "subsystems": [ 00:24:54.767 { 00:24:54.767 "subsystem": "keyring", 00:24:54.767 "config": [ 00:24:54.767 { 00:24:54.767 "method": "keyring_file_add_key", 00:24:54.767 "params": { 00:24:54.767 "name": "key0", 00:24:54.767 "path": "/tmp/tmp.UqGGInoeRK" 00:24:54.767 } 00:24:54.767 } 00:24:54.767 ] 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "subsystem": "iobuf", 00:24:54.767 "config": [ 00:24:54.767 { 00:24:54.767 "method": "iobuf_set_options", 00:24:54.767 "params": { 00:24:54.767 "small_pool_count": 8192, 00:24:54.767 "large_pool_count": 1024, 00:24:54.767 "small_bufsize": 8192, 00:24:54.767 "large_bufsize": 135168 00:24:54.767 } 00:24:54.767 } 00:24:54.767 ] 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "subsystem": "sock", 00:24:54.767 "config": [ 00:24:54.767 { 00:24:54.767 "method": "sock_set_default_impl", 00:24:54.767 "params": { 00:24:54.767 "impl_name": "posix" 00:24:54.767 } 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "method": "sock_impl_set_options", 00:24:54.767 "params": { 00:24:54.767 "impl_name": "ssl", 00:24:54.767 "recv_buf_size": 4096, 00:24:54.767 "send_buf_size": 4096, 00:24:54.767 "enable_recv_pipe": true, 00:24:54.767 "enable_quickack": false, 00:24:54.767 "enable_placement_id": 0, 00:24:54.767 "enable_zerocopy_send_server": true, 00:24:54.767 "enable_zerocopy_send_client": false, 00:24:54.767 "zerocopy_threshold": 0, 00:24:54.767 "tls_version": 0, 00:24:54.767 "enable_ktls": false 00:24:54.767 } 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "method": "sock_impl_set_options", 00:24:54.767 "params": { 00:24:54.767 "impl_name": "posix", 00:24:54.767 "recv_buf_size": 2097152, 00:24:54.767 "send_buf_size": 2097152, 00:24:54.767 "enable_recv_pipe": true, 00:24:54.767 "enable_quickack": false, 00:24:54.767 "enable_placement_id": 0, 00:24:54.767 "enable_zerocopy_send_server": true, 00:24:54.767 "enable_zerocopy_send_client": false, 00:24:54.767 "zerocopy_threshold": 0, 00:24:54.767 "tls_version": 0, 00:24:54.767 "enable_ktls": false 00:24:54.767 } 00:24:54.767 } 00:24:54.767 ] 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 
"subsystem": "vmd", 00:24:54.767 "config": [] 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "subsystem": "accel", 00:24:54.767 "config": [ 00:24:54.767 { 00:24:54.767 "method": "accel_set_options", 00:24:54.767 "params": { 00:24:54.767 "small_cache_size": 128, 00:24:54.767 "large_cache_size": 16, 00:24:54.767 "task_count": 2048, 00:24:54.767 "sequence_count": 2048, 00:24:54.767 "buf_count": 2048 00:24:54.767 } 00:24:54.767 } 00:24:54.767 ] 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "subsystem": "bdev", 00:24:54.767 "config": [ 00:24:54.767 { 00:24:54.767 "method": "bdev_set_options", 00:24:54.767 "params": { 00:24:54.767 "bdev_io_pool_size": 65535, 00:24:54.767 "bdev_io_cache_size": 256, 00:24:54.767 "bdev_auto_examine": true, 00:24:54.767 "iobuf_small_cache_size": 128, 00:24:54.767 "iobuf_large_cache_size": 16 00:24:54.767 } 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "method": "bdev_raid_set_options", 00:24:54.767 "params": { 00:24:54.767 "process_window_size_kb": 1024, 00:24:54.767 "process_max_bandwidth_mb_sec": 0 00:24:54.767 } 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "method": "bdev_iscsi_set_options", 00:24:54.767 "params": { 00:24:54.767 "timeout_sec": 30 00:24:54.767 } 00:24:54.767 }, 00:24:54.767 { 00:24:54.767 "method": "bdev_nvme_set_options", 00:24:54.767 "params": { 00:24:54.767 "action_on_timeout": "none", 00:24:54.767 "timeout_us": 0, 00:24:54.767 "timeout_admin_us": 0, 00:24:54.767 "keep_alive_timeout_ms": 10000, 00:24:54.767 "arbitration_burst": 0, 00:24:54.767 "low_priority_weight": 0, 00:24:54.767 "medium_priority_weight": 0, 00:24:54.767 "high_priority_weight": 0, 00:24:54.767 "nvme_adminq_poll_period_us": 10000, 00:24:54.767 "nvme_ioq_poll_period_us": 0, 00:24:54.767 "io_queue_requests": 512, 00:24:54.767 "delay_cmd_submit": true, 00:24:54.767 "transport_retry_count": 4, 00:24:54.767 "bdev_retry_count": 3, 00:24:54.767 "transport_ack_timeout": 0, 00:24:54.767 "ctrlr_loss_timeout_sec": 0, 00:24:54.767 "reconnect_delay_sec": 0, 00:24:54.767 "fast_io_fail_timeout_sec": 0, 00:24:54.767 "disable_auto_failback": false, 00:24:54.767 "generate_uuids": false, 00:24:54.767 "transport_tos": 0, 00:24:54.767 "nvme_error_stat": false, 00:24:54.767 "rdma_srq_size": 0, 00:24:54.767 "io_path_stat": false, 00:24:54.767 "allow_accel_sequence": false, 00:24:54.767 "rdma_max_cq_size": 0, 00:24:54.767 "rdma_cm_event_timeout_ms": 0, 00:24:54.767 "dhchap_digests": [ 00:24:54.767 "sha256", 00:24:54.767 "sha384", 00:24:54.767 "sha512" 00:24:54.767 ], 00:24:54.767 "dhchap_dhgroups": [ 00:24:54.767 "null", 00:24:54.767 "ffdhe2048", 00:24:54.767 "ffdhe3072", 00:24:54.767 "ffdhe4096", 00:24:54.767 "ffdhe6144", 00:24:54.767 "ffdhe8192" 00:24:54.767 ] 00:24:54.767 } 00:24:54.768 }, 00:24:54.768 { 00:24:54.768 "method": "bdev_nvme_attach_controller", 00:24:54.768 "params": { 00:24:54.768 "name": "TLSTEST", 00:24:54.768 "trtype": "TCP", 00:24:54.768 "adrfam": "IPv4", 00:24:54.768 "traddr": "10.0.0.2", 00:24:54.768 "trsvcid": "4420", 00:24:54.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.768 "prchk_reftag": false, 00:24:54.768 "prchk_guard": false, 00:24:54.768 "ctrlr_loss_timeout_sec": 0, 00:24:54.768 "reconnect_delay_sec": 0, 00:24:54.768 "fast_io_fail_timeout_sec": 0, 00:24:54.768 "psk": "key0", 00:24:54.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.768 "hdgst": false, 00:24:54.768 "ddgst": false 00:24:54.768 } 00:24:54.768 }, 00:24:54.768 { 00:24:54.768 "method": "bdev_nvme_set_hotplug", 00:24:54.768 "params": { 00:24:54.768 "period_us": 100000, 00:24:54.768 "enable": false 
00:24:54.768 } 00:24:54.768 }, 00:24:54.768 { 00:24:54.768 "method": "bdev_wait_for_examine" 00:24:54.768 } 00:24:54.768 ] 00:24:54.768 }, 00:24:54.768 { 00:24:54.768 "subsystem": "nbd", 00:24:54.768 "config": [] 00:24:54.768 } 00:24:54.768 ] 00:24:54.768 }' 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 419254 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 419254 ']' 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 419254 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419254 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419254' 00:24:54.768 killing process with pid 419254 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 419254 00:24:54.768 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.768 00:24:54.768 Latency(us) 00:24:54.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.768 =================================================================================================================== 00:24:54.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 419254 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 418887 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 418887 ']' 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 418887 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:54.768 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418887 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418887' 00:24:55.030 killing process with pid 418887 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 418887 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 418887 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:55.030 15:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.030 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:55.030 "subsystems": [ 00:24:55.030 { 00:24:55.030 "subsystem": "keyring", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "keyring_file_add_key", 00:24:55.030 "params": { 00:24:55.030 "name": "key0", 00:24:55.030 "path": "/tmp/tmp.UqGGInoeRK" 00:24:55.030 } 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "iobuf", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "iobuf_set_options", 00:24:55.030 "params": { 00:24:55.030 "small_pool_count": 8192, 00:24:55.030 "large_pool_count": 1024, 00:24:55.030 "small_bufsize": 8192, 00:24:55.030 "large_bufsize": 135168 00:24:55.030 } 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "sock", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "sock_set_default_impl", 00:24:55.030 "params": { 00:24:55.030 "impl_name": "posix" 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "sock_impl_set_options", 00:24:55.030 "params": { 00:24:55.030 "impl_name": "ssl", 00:24:55.030 "recv_buf_size": 4096, 00:24:55.030 "send_buf_size": 4096, 00:24:55.030 "enable_recv_pipe": true, 00:24:55.030 "enable_quickack": false, 00:24:55.030 "enable_placement_id": 0, 00:24:55.030 "enable_zerocopy_send_server": true, 00:24:55.030 "enable_zerocopy_send_client": false, 00:24:55.030 "zerocopy_threshold": 0, 00:24:55.030 "tls_version": 0, 00:24:55.030 "enable_ktls": false 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "sock_impl_set_options", 00:24:55.030 "params": { 00:24:55.030 "impl_name": "posix", 00:24:55.030 "recv_buf_size": 2097152, 00:24:55.030 "send_buf_size": 2097152, 00:24:55.030 "enable_recv_pipe": true, 00:24:55.030 "enable_quickack": false, 00:24:55.030 "enable_placement_id": 0, 00:24:55.030 "enable_zerocopy_send_server": true, 00:24:55.030 "enable_zerocopy_send_client": false, 00:24:55.030 "zerocopy_threshold": 0, 00:24:55.030 "tls_version": 0, 00:24:55.030 "enable_ktls": false 00:24:55.030 } 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "vmd", 00:24:55.030 "config": [] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "accel", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "accel_set_options", 00:24:55.030 "params": { 00:24:55.030 "small_cache_size": 128, 00:24:55.030 "large_cache_size": 16, 00:24:55.030 "task_count": 2048, 00:24:55.030 "sequence_count": 2048, 00:24:55.030 "buf_count": 2048 00:24:55.030 } 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "bdev", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "bdev_set_options", 00:24:55.030 "params": { 00:24:55.030 "bdev_io_pool_size": 65535, 00:24:55.030 "bdev_io_cache_size": 256, 00:24:55.030 "bdev_auto_examine": true, 00:24:55.030 "iobuf_small_cache_size": 128, 00:24:55.030 "iobuf_large_cache_size": 16 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_raid_set_options", 00:24:55.030 "params": { 00:24:55.030 "process_window_size_kb": 1024, 00:24:55.030 "process_max_bandwidth_mb_sec": 0 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_iscsi_set_options", 00:24:55.030 "params": { 00:24:55.030 "timeout_sec": 30 
00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_nvme_set_options", 00:24:55.030 "params": { 00:24:55.030 "action_on_timeout": "none", 00:24:55.030 "timeout_us": 0, 00:24:55.030 "timeout_admin_us": 0, 00:24:55.030 "keep_alive_timeout_ms": 10000, 00:24:55.030 "arbitration_burst": 0, 00:24:55.030 "low_priority_weight": 0, 00:24:55.030 "medium_priority_weight": 0, 00:24:55.030 "high_priority_weight": 0, 00:24:55.030 "nvme_adminq_poll_period_us": 10000, 00:24:55.030 "nvme_ioq_poll_period_us": 0, 00:24:55.030 "io_queue_requests": 0, 00:24:55.030 "delay_cmd_submit": true, 00:24:55.030 "transport_retry_count": 4, 00:24:55.030 "bdev_retry_count": 3, 00:24:55.030 "transport_ack_timeout": 0, 00:24:55.030 "ctrlr_loss_timeout_sec": 0, 00:24:55.030 "reconnect_delay_sec": 0, 00:24:55.030 "fast_io_fail_timeout_sec": 0, 00:24:55.030 "disable_auto_failback": false, 00:24:55.030 "generate_uuids": false, 00:24:55.030 "transport_tos": 0, 00:24:55.030 "nvme_error_stat": false, 00:24:55.030 "rdma_srq_size": 0, 00:24:55.030 "io_path_stat": false, 00:24:55.030 "allow_accel_sequence": false, 00:24:55.030 "rdma_max_cq_size": 0, 00:24:55.030 "rdma_cm_event_timeout_ms": 0, 00:24:55.030 "dhchap_digests": [ 00:24:55.030 "sha256", 00:24:55.030 "sha384", 00:24:55.030 "sha512" 00:24:55.030 ], 00:24:55.030 "dhchap_dhgroups": [ 00:24:55.030 "null", 00:24:55.030 "ffdhe2048", 00:24:55.030 "ffdhe3072", 00:24:55.030 "ffdhe4096", 00:24:55.030 "ffdhe6144", 00:24:55.030 "ffdhe8192" 00:24:55.030 ] 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_nvme_set_hotplug", 00:24:55.030 "params": { 00:24:55.030 "period_us": 100000, 00:24:55.030 "enable": false 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_malloc_create", 00:24:55.030 "params": { 00:24:55.030 "name": "malloc0", 00:24:55.030 "num_blocks": 8192, 00:24:55.030 "block_size": 4096, 00:24:55.030 "physical_block_size": 4096, 00:24:55.030 "uuid": "9b1a934a-8337-4b82-a699-7ea8b2f99965", 00:24:55.030 "optimal_io_boundary": 0, 00:24:55.030 "md_size": 0, 00:24:55.030 "dif_type": 0, 00:24:55.030 "dif_is_head_of_md": false, 00:24:55.030 "dif_pi_format": 0 00:24:55.030 } 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "method": "bdev_wait_for_examine" 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "nbd", 00:24:55.030 "config": [] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "scheduler", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.030 "method": "framework_set_scheduler", 00:24:55.030 "params": { 00:24:55.030 "name": "static" 00:24:55.030 } 00:24:55.030 } 00:24:55.030 ] 00:24:55.030 }, 00:24:55.030 { 00:24:55.030 "subsystem": "nvmf", 00:24:55.030 "config": [ 00:24:55.030 { 00:24:55.031 "method": "nvmf_set_config", 00:24:55.031 "params": { 00:24:55.031 "discovery_filter": "match_any", 00:24:55.031 "admin_cmd_passthru": { 00:24:55.031 "identify_ctrlr": false 00:24:55.031 }, 00:24:55.031 "dhchap_digests": [ 00:24:55.031 "sha256", 00:24:55.031 "sha384", 00:24:55.031 "sha512" 00:24:55.031 ], 00:24:55.031 "dhchap_dhgroups": [ 00:24:55.031 "null", 00:24:55.031 "ffdhe2048", 00:24:55.031 "ffdhe3072", 00:24:55.031 "ffdhe4096", 00:24:55.031 "ffdhe6144", 00:24:55.031 "ffdhe8192" 00:24:55.031 ] 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_set_max_subsystems", 00:24:55.031 "params": { 00:24:55.031 "max_subsystems": 1024 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_set_crdt", 00:24:55.031 "params": { 00:24:55.031 
"crdt1": 0, 00:24:55.031 "crdt2": 0, 00:24:55.031 "crdt3": 0 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_create_transport", 00:24:55.031 "params": { 00:24:55.031 "trtype": "TCP", 00:24:55.031 "max_queue_depth": 128, 00:24:55.031 "max_io_qpairs_per_ctrlr": 127, 00:24:55.031 "in_capsule_data_size": 4096, 00:24:55.031 "max_io_size": 131072, 00:24:55.031 "io_unit_size": 131072, 00:24:55.031 "max_aq_depth": 128, 00:24:55.031 "num_shared_buffers": 511, 00:24:55.031 "buf_cache_size": 4294967295, 00:24:55.031 "dif_insert_or_strip": false, 00:24:55.031 "zcopy": false, 00:24:55.031 "c2h_success": false, 00:24:55.031 "sock_priority": 0, 00:24:55.031 "abort_timeout_sec": 1, 00:24:55.031 "ack_timeout": 0, 00:24:55.031 "data_wr_pool_size": 0 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_create_subsystem", 00:24:55.031 "params": { 00:24:55.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.031 "allow_any_host": false, 00:24:55.031 "serial_number": "SPDK00000000000001", 00:24:55.031 "model_number": "SPDK bdev Controller", 00:24:55.031 "max_namespaces": 10, 00:24:55.031 "min_cntlid": 1, 00:24:55.031 "max_cntlid": 65519, 00:24:55.031 "ana_reporting": false 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_subsystem_add_host", 00:24:55.031 "params": { 00:24:55.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.031 "host": "nqn.2016-06.io.spdk:host1", 00:24:55.031 "psk": "key0" 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_subsystem_add_ns", 00:24:55.031 "params": { 00:24:55.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.031 "namespace": { 00:24:55.031 "nsid": 1, 00:24:55.031 "bdev_name": "malloc0", 00:24:55.031 "nguid": "9B1A934A83374B82A6997EA8B2F99965", 00:24:55.031 "uuid": "9b1a934a-8337-4b82-a699-7ea8b2f99965", 00:24:55.031 "no_auto_visible": false 00:24:55.031 } 00:24:55.031 } 00:24:55.031 }, 00:24:55.031 { 00:24:55.031 "method": "nvmf_subsystem_add_listener", 00:24:55.031 "params": { 00:24:55.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.031 "listen_address": { 00:24:55.031 "trtype": "TCP", 00:24:55.031 "adrfam": "IPv4", 00:24:55.031 "traddr": "10.0.0.2", 00:24:55.031 "trsvcid": "4420" 00:24:55.031 }, 00:24:55.031 "secure_channel": true 00:24:55.031 } 00:24:55.031 } 00:24:55.031 ] 00:24:55.031 } 00:24:55.031 ] 00:24:55.031 }' 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=419627 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 419627 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 419627 ']' 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.031 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.031 [2024-09-27 15:43:35.504780] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:55.031 [2024-09-27 15:43:35.504841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.292 [2024-09-27 15:43:35.589541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.292 [2024-09-27 15:43:35.618620] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.292 [2024-09-27 15:43:35.618654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.292 [2024-09-27 15:43:35.618659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.292 [2024-09-27 15:43:35.618664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.292 [2024-09-27 15:43:35.618668] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.292 [2024-09-27 15:43:35.618711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.555 [2024-09-27 15:43:35.815704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.555 [2024-09-27 15:43:35.847697] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:55.555 [2024-09-27 15:43:35.847890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.817 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.817 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:55.817 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:55.817 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.817 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.077 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=419962 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 419962 /var/tmp/bdevperf.sock 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 419962 ']' 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
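The initiator side follows the same pattern: bdevperf is started idle with -z on its own RPC socket, its config is fed in through a process substitution (which is why the /dev/fd/63 path appears in the command line below), and the TLS queue pair is set up against key0. A sketch of that sequence, consolidating flags that appear across this run:

    # start bdevperf idle; -z makes it wait for RPC instead of running immediately
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # register the same PSK on the initiator side and attach over TLS
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0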
00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.078 15:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:56.078 "subsystems": [ 00:24:56.078 { 00:24:56.078 "subsystem": "keyring", 00:24:56.078 "config": [ 00:24:56.078 { 00:24:56.078 "method": "keyring_file_add_key", 00:24:56.078 "params": { 00:24:56.078 "name": "key0", 00:24:56.078 "path": "/tmp/tmp.UqGGInoeRK" 00:24:56.078 } 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "iobuf", 00:24:56.078 "config": [ 00:24:56.078 { 00:24:56.078 "method": "iobuf_set_options", 00:24:56.078 "params": { 00:24:56.078 "small_pool_count": 8192, 00:24:56.078 "large_pool_count": 1024, 00:24:56.078 "small_bufsize": 8192, 00:24:56.078 "large_bufsize": 135168 00:24:56.078 } 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "sock", 00:24:56.078 "config": [ 00:24:56.078 { 00:24:56.078 "method": "sock_set_default_impl", 00:24:56.078 "params": { 00:24:56.078 "impl_name": "posix" 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "sock_impl_set_options", 00:24:56.078 "params": { 00:24:56.078 "impl_name": "ssl", 00:24:56.078 "recv_buf_size": 4096, 00:24:56.078 "send_buf_size": 4096, 00:24:56.078 "enable_recv_pipe": true, 00:24:56.078 "enable_quickack": false, 00:24:56.078 "enable_placement_id": 0, 00:24:56.078 "enable_zerocopy_send_server": true, 00:24:56.078 "enable_zerocopy_send_client": false, 00:24:56.078 "zerocopy_threshold": 0, 00:24:56.078 "tls_version": 0, 00:24:56.078 "enable_ktls": false 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "sock_impl_set_options", 00:24:56.078 "params": { 00:24:56.078 "impl_name": "posix", 00:24:56.078 "recv_buf_size": 2097152, 00:24:56.078 "send_buf_size": 2097152, 00:24:56.078 "enable_recv_pipe": true, 00:24:56.078 "enable_quickack": false, 00:24:56.078 "enable_placement_id": 0, 00:24:56.078 "enable_zerocopy_send_server": true, 00:24:56.078 "enable_zerocopy_send_client": false, 00:24:56.078 "zerocopy_threshold": 0, 00:24:56.078 "tls_version": 0, 00:24:56.078 "enable_ktls": false 00:24:56.078 } 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "vmd", 00:24:56.078 "config": [] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "accel", 00:24:56.078 "config": [ 00:24:56.078 { 00:24:56.078 "method": "accel_set_options", 00:24:56.078 "params": { 00:24:56.078 "small_cache_size": 128, 00:24:56.078 "large_cache_size": 16, 00:24:56.078 "task_count": 2048, 00:24:56.078 "sequence_count": 2048, 00:24:56.078 "buf_count": 2048 00:24:56.078 } 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "bdev", 00:24:56.078 "config": [ 00:24:56.078 { 00:24:56.078 "method": "bdev_set_options", 00:24:56.078 "params": { 00:24:56.078 "bdev_io_pool_size": 65535, 00:24:56.078 "bdev_io_cache_size": 256, 00:24:56.078 "bdev_auto_examine": true, 00:24:56.078 "iobuf_small_cache_size": 128, 00:24:56.078 "iobuf_large_cache_size": 16 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_raid_set_options", 00:24:56.078 
"params": { 00:24:56.078 "process_window_size_kb": 1024, 00:24:56.078 "process_max_bandwidth_mb_sec": 0 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_iscsi_set_options", 00:24:56.078 "params": { 00:24:56.078 "timeout_sec": 30 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_nvme_set_options", 00:24:56.078 "params": { 00:24:56.078 "action_on_timeout": "none", 00:24:56.078 "timeout_us": 0, 00:24:56.078 "timeout_admin_us": 0, 00:24:56.078 "keep_alive_timeout_ms": 10000, 00:24:56.078 "arbitration_burst": 0, 00:24:56.078 "low_priority_weight": 0, 00:24:56.078 "medium_priority_weight": 0, 00:24:56.078 "high_priority_weight": 0, 00:24:56.078 "nvme_adminq_poll_period_us": 10000, 00:24:56.078 "nvme_ioq_poll_period_us": 0, 00:24:56.078 "io_queue_requests": 512, 00:24:56.078 "delay_cmd_submit": true, 00:24:56.078 "transport_retry_count": 4, 00:24:56.078 "bdev_retry_count": 3, 00:24:56.078 "transport_ack_timeout": 0, 00:24:56.078 "ctrlr_loss_timeout_sec": 0, 00:24:56.078 "reconnect_delay_sec": 0, 00:24:56.078 "fast_io_fail_timeout_sec": 0, 00:24:56.078 "disable_auto_failback": false, 00:24:56.078 "generate_uuids": false, 00:24:56.078 "transport_tos": 0, 00:24:56.078 "nvme_error_stat": false, 00:24:56.078 "rdma_srq_size": 0, 00:24:56.078 "io_path_stat": false, 00:24:56.078 "allow_accel_sequence": false, 00:24:56.078 "rdma_max_cq_size": 0, 00:24:56.078 "rdma_cm_event_timeout_ms": 0, 00:24:56.078 "dhchap_digests": [ 00:24:56.078 "sha256", 00:24:56.078 "sha384", 00:24:56.078 "sha512" 00:24:56.078 ], 00:24:56.078 "dhchap_dhgroups": [ 00:24:56.078 "null", 00:24:56.078 "ffdhe2048", 00:24:56.078 "ffdhe3072", 00:24:56.078 "ffdhe4096", 00:24:56.078 "ffdhe6144", 00:24:56.078 "ffdhe8192" 00:24:56.078 ] 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_nvme_attach_controller", 00:24:56.078 "params": { 00:24:56.078 "name": "TLSTEST", 00:24:56.078 "trtype": "TCP", 00:24:56.078 "adrfam": "IPv4", 00:24:56.078 "traddr": "10.0.0.2", 00:24:56.078 "trsvcid": "4420", 00:24:56.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.078 "prchk_reftag": false, 00:24:56.078 "prchk_guard": false, 00:24:56.078 "ctrlr_loss_timeout_sec": 0, 00:24:56.078 "reconnect_delay_sec": 0, 00:24:56.078 "fast_io_fail_timeout_sec": 0, 00:24:56.078 "psk": "key0", 00:24:56.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.078 "hdgst": false, 00:24:56.078 "ddgst": false 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_nvme_set_hotplug", 00:24:56.078 "params": { 00:24:56.078 "period_us": 100000, 00:24:56.078 "enable": false 00:24:56.078 } 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "method": "bdev_wait_for_examine" 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }, 00:24:56.078 { 00:24:56.078 "subsystem": "nbd", 00:24:56.078 "config": [] 00:24:56.078 } 00:24:56.078 ] 00:24:56.078 }' 00:24:56.078 [2024-09-27 15:43:36.373320] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:56.078 [2024-09-27 15:43:36.373360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419962 ] 00:24:56.078 [2024-09-27 15:43:36.416589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.078 [2024-09-27 15:43:36.444731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.338 [2024-09-27 15:43:36.573172] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.908 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.908 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:56.908 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:56.908 Running I/O for 10 seconds... 00:25:07.202 5366.00 IOPS, 20.96 MiB/s 5676.50 IOPS, 22.17 MiB/s 5827.00 IOPS, 22.76 MiB/s 5578.25 IOPS, 21.79 MiB/s 5470.40 IOPS, 21.37 MiB/s 5372.83 IOPS, 20.99 MiB/s 5418.00 IOPS, 21.16 MiB/s 5425.75 IOPS, 21.19 MiB/s 5500.89 IOPS, 21.49 MiB/s 5332.40 IOPS, 20.83 MiB/s 00:25:07.202 Latency(us) 00:25:07.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:07.202 Verification LBA range: start 0x0 length 0x2000 00:25:07.202 TLSTESTn1 : 10.09 5297.68 20.69 0.00 0.00 24062.99 4860.59 88692.05 00:25:07.202 =================================================================================================================== 00:25:07.202 Total : 5297.68 20.69 0.00 0.00 24062.99 4860.59 88692.05 00:25:07.202 { 00:25:07.202 "results": [ 00:25:07.202 { 00:25:07.202 "job": "TLSTESTn1", 00:25:07.202 "core_mask": "0x4", 00:25:07.202 "workload": "verify", 00:25:07.202 "status": "finished", 00:25:07.202 "verify_range": { 00:25:07.202 "start": 0, 00:25:07.202 "length": 8192 00:25:07.202 }, 00:25:07.202 "queue_depth": 128, 00:25:07.202 "io_size": 4096, 00:25:07.202 "runtime": 10.089702, 00:25:07.202 "iops": 5297.678761969382, 00:25:07.202 "mibps": 20.6940576639429, 00:25:07.202 "io_failed": 0, 00:25:07.202 "io_timeout": 0, 00:25:07.202 "avg_latency_us": 24062.985680860085, 00:25:07.202 "min_latency_us": 4860.586666666667, 00:25:07.202 "max_latency_us": 88692.05333333333 00:25:07.202 } 00:25:07.202 ], 00:25:07.202 "core_count": 1 00:25:07.202 } 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 419962 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 419962 ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 419962 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419962 00:25:07.202 15:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419962' 00:25:07.202 killing process with pid 419962 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 419962 00:25:07.202 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.202 00:25:07.202 Latency(us) 00:25:07.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.202 =================================================================================================================== 00:25:07.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 419962 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 419627 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 419627 ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 419627 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 419627 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 419627' 00:25:07.202 killing process with pid 419627 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 419627 00:25:07.202 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 419627 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=422040 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 422040 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 422040 ']' 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.464 15:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.464 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 [2024-09-27 15:43:47.800180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:07.464 [2024-09-27 15:43:47.800239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.464 [2024-09-27 15:43:47.884368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.464 [2024-09-27 15:43:47.926998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.464 [2024-09-27 15:43:47.927052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.464 [2024-09-27 15:43:47.927060] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.464 [2024-09-27 15:43:47.927068] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.464 [2024-09-27 15:43:47.927080] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.464 [2024-09-27 15:43:47.927102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.UqGGInoeRK 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UqGGInoeRK 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:08.409 [2024-09-27 15:43:48.833656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.409 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:08.670 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:08.931 [2024-09-27 15:43:49.230651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:25:08.931 [2024-09-27 15:43:49.230968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.931 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:09.192 malloc0 00:25:09.192 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:09.453 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:25:09.453 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=422650 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 422650 /var/tmp/bdevperf.sock 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 422650 ']' 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.713 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.714 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.714 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.714 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.714 [2024-09-27 15:43:50.110027] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
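Every daemon start in this log is gated on waitforlisten before any RPC is issued. A simplified sketch of the idiom (assuming, as the real helper does, that a successful rpc_get_methods call is a sufficient readiness check; the retry count and message mirror the traces above):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((i++ < max_retries)); do
            kill -0 "$pid" 2> /dev/null || return 1   # daemon died during startup
            ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }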
00:25:09.714 [2024-09-27 15:43:50.110092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422650 ] 00:25:09.714 [2024-09-27 15:43:50.187774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.973 [2024-09-27 15:43:50.217530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.973 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.973 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:09.973 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:25:10.233 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:10.233 [2024-09-27 15:43:50.627013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.233 nvme0n1 00:25:10.493 15:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.493 Running I/O for 1 seconds... 00:25:11.433 3368.00 IOPS, 13.16 MiB/s 00:25:11.433 Latency(us) 00:25:11.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.433 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:11.433 Verification LBA range: start 0x0 length 0x2000 00:25:11.433 nvme0n1 : 1.02 3441.20 13.44 0.00 0.00 36909.48 5870.93 192238.93 00:25:11.433 =================================================================================================================== 00:25:11.433 Total : 3441.20 13.44 0.00 0.00 36909.48 5870.93 192238.93 00:25:11.433 { 00:25:11.433 "results": [ 00:25:11.433 { 00:25:11.433 "job": "nvme0n1", 00:25:11.433 "core_mask": "0x2", 00:25:11.433 "workload": "verify", 00:25:11.433 "status": "finished", 00:25:11.433 "verify_range": { 00:25:11.433 "start": 0, 00:25:11.433 "length": 8192 00:25:11.433 }, 00:25:11.433 "queue_depth": 128, 00:25:11.433 "io_size": 4096, 00:25:11.433 "runtime": 1.015924, 00:25:11.433 "iops": 3441.2022946598368, 00:25:11.433 "mibps": 13.442196463514987, 00:25:11.433 "io_failed": 0, 00:25:11.433 "io_timeout": 0, 00:25:11.433 "avg_latency_us": 36909.48247139588, 00:25:11.433 "min_latency_us": 5870.933333333333, 00:25:11.433 "max_latency_us": 192238.93333333332 00:25:11.433 } 00:25:11.433 ], 00:25:11.433 "core_count": 1 00:25:11.433 } 00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 422650 00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 422650 ']' 00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 422650 00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
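The per-job summary above is also emitted as JSON (the { "results": [ ... ] } block), which is easier to post-process than the fixed-width table. A minimal sketch, assuming the block has been captured to a hypothetical results.json and that jq is installed:

    # print job name, throughput and average latency from a captured results block
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json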
00:25:11.433 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422650 00:25:11.694 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.694 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.694 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 422650' 00:25:11.694 killing process with pid 422650 00:25:11.694 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 422650 00:25:11.694 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.694 00:25:11.694 Latency(us) 00:25:11.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.694 =================================================================================================================== 00:25:11.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.694 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 422650 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 422040 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 422040 ']' 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 422040 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422040 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 422040' 00:25:11.694 killing process with pid 422040 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 422040 00:25:11.694 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 422040 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=423016 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 423016 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 423016 ']' 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.954 15:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:11.954 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.954 [2024-09-27 15:43:52.283186] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:11.954 [2024-09-27 15:43:52.283236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.954 [2024-09-27 15:43:52.364078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.954 [2024-09-27 15:43:52.391256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.954 [2024-09-27 15:43:52.391291] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.954 [2024-09-27 15:43:52.391297] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.954 [2024-09-27 15:43:52.391301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.954 [2024-09-27 15:43:52.391308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.954 [2024-09-27 15:43:52.391323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.895 [2024-09-27 15:43:53.129174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.895 malloc0 00:25:12.895 [2024-09-27 15:43:53.174717] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.895 [2024-09-27 15:43:53.174963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=423193 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 423193 /var/tmp/bdevperf.sock 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 423193 ']' 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.895 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.895 [2024-09-27 15:43:53.264272] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:12.895 [2024-09-27 15:43:53.264320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423193 ] 00:25:12.895 [2024-09-27 15:43:53.340823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.895 [2024-09-27 15:43:53.369451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.836 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.836 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:13.836 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK 00:25:13.836 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:14.097 [2024-09-27 15:43:54.375494] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:14.097 nvme0n1 00:25:14.097 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.097 Running I/O for 1 seconds... 
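The trace above is the host side of the TLS path: the pre-shared key is loaded into bdevperf's keyring over its RPC socket, a controller is attached to the TLS listener with that key, and the perf wrapper drives the timed workload whose results follow. A minimal sketch of the same three RPC steps, assuming rpc.py and bdevperf.py are invoked from the SPDK tree as in this run and that the PSK interchange file is the temporary key created earlier:

  # register the PSK file under the keyring name "key0"
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqGGInoeRK
  # attach an NVMe/TCP controller to the TLS listener, referencing that key
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the configured verify workload for the -t 1 second window
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests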
00:25:15.480 2231.00 IOPS, 8.71 MiB/s 00:25:15.480 Latency(us) 00:25:15.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.480 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:15.480 Verification LBA range: start 0x0 length 0x2000 00:25:15.480 nvme0n1 : 1.02 2324.22 9.08 0.00 0.00 54662.59 4505.60 191365.12 00:25:15.480 =================================================================================================================== 00:25:15.480 Total : 2324.22 9.08 0.00 0.00 54662.59 4505.60 191365.12 00:25:15.480 { 00:25:15.480 "results": [ 00:25:15.480 { 00:25:15.480 "job": "nvme0n1", 00:25:15.481 "core_mask": "0x2", 00:25:15.481 "workload": "verify", 00:25:15.481 "status": "finished", 00:25:15.481 "verify_range": { 00:25:15.481 "start": 0, 00:25:15.481 "length": 8192 00:25:15.481 }, 00:25:15.481 "queue_depth": 128, 00:25:15.481 "io_size": 4096, 00:25:15.481 "runtime": 1.015396, 00:25:15.481 "iops": 2324.2163648468186, 00:25:15.481 "mibps": 9.078970175182885, 00:25:15.481 "io_failed": 0, 00:25:15.481 "io_timeout": 0, 00:25:15.481 "avg_latency_us": 54662.58946892655, 00:25:15.481 "min_latency_us": 4505.6, 00:25:15.481 "max_latency_us": 191365.12 00:25:15.481 } 00:25:15.481 ], 00:25:15.481 "core_count": 1 00:25:15.481 } 00:25:15.481 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:15.481 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.481 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.481 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.481 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:15.481 "subsystems": [ 00:25:15.481 { 00:25:15.481 "subsystem": "keyring", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "keyring_file_add_key", 00:25:15.481 "params": { 00:25:15.481 "name": "key0", 00:25:15.481 "path": "/tmp/tmp.UqGGInoeRK" 00:25:15.481 } 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "iobuf", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "iobuf_set_options", 00:25:15.481 "params": { 00:25:15.481 "small_pool_count": 8192, 00:25:15.481 "large_pool_count": 1024, 00:25:15.481 "small_bufsize": 8192, 00:25:15.481 "large_bufsize": 135168 00:25:15.481 } 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "sock", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "sock_set_default_impl", 00:25:15.481 "params": { 00:25:15.481 "impl_name": "posix" 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "sock_impl_set_options", 00:25:15.481 "params": { 00:25:15.481 "impl_name": "ssl", 00:25:15.481 "recv_buf_size": 4096, 00:25:15.481 "send_buf_size": 4096, 00:25:15.481 "enable_recv_pipe": true, 00:25:15.481 "enable_quickack": false, 00:25:15.481 "enable_placement_id": 0, 00:25:15.481 "enable_zerocopy_send_server": true, 00:25:15.481 "enable_zerocopy_send_client": false, 00:25:15.481 "zerocopy_threshold": 0, 00:25:15.481 "tls_version": 0, 00:25:15.481 "enable_ktls": false 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "sock_impl_set_options", 00:25:15.481 "params": { 00:25:15.481 "impl_name": "posix", 00:25:15.481 "recv_buf_size": 2097152, 00:25:15.481 "send_buf_size": 2097152, 00:25:15.481 "enable_recv_pipe": true, 00:25:15.481 
"enable_quickack": false, 00:25:15.481 "enable_placement_id": 0, 00:25:15.481 "enable_zerocopy_send_server": true, 00:25:15.481 "enable_zerocopy_send_client": false, 00:25:15.481 "zerocopy_threshold": 0, 00:25:15.481 "tls_version": 0, 00:25:15.481 "enable_ktls": false 00:25:15.481 } 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "vmd", 00:25:15.481 "config": [] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "accel", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "accel_set_options", 00:25:15.481 "params": { 00:25:15.481 "small_cache_size": 128, 00:25:15.481 "large_cache_size": 16, 00:25:15.481 "task_count": 2048, 00:25:15.481 "sequence_count": 2048, 00:25:15.481 "buf_count": 2048 00:25:15.481 } 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "bdev", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "bdev_set_options", 00:25:15.481 "params": { 00:25:15.481 "bdev_io_pool_size": 65535, 00:25:15.481 "bdev_io_cache_size": 256, 00:25:15.481 "bdev_auto_examine": true, 00:25:15.481 "iobuf_small_cache_size": 128, 00:25:15.481 "iobuf_large_cache_size": 16 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_raid_set_options", 00:25:15.481 "params": { 00:25:15.481 "process_window_size_kb": 1024, 00:25:15.481 "process_max_bandwidth_mb_sec": 0 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_iscsi_set_options", 00:25:15.481 "params": { 00:25:15.481 "timeout_sec": 30 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_nvme_set_options", 00:25:15.481 "params": { 00:25:15.481 "action_on_timeout": "none", 00:25:15.481 "timeout_us": 0, 00:25:15.481 "timeout_admin_us": 0, 00:25:15.481 "keep_alive_timeout_ms": 10000, 00:25:15.481 "arbitration_burst": 0, 00:25:15.481 "low_priority_weight": 0, 00:25:15.481 "medium_priority_weight": 0, 00:25:15.481 "high_priority_weight": 0, 00:25:15.481 "nvme_adminq_poll_period_us": 10000, 00:25:15.481 "nvme_ioq_poll_period_us": 0, 00:25:15.481 "io_queue_requests": 0, 00:25:15.481 "delay_cmd_submit": true, 00:25:15.481 "transport_retry_count": 4, 00:25:15.481 "bdev_retry_count": 3, 00:25:15.481 "transport_ack_timeout": 0, 00:25:15.481 "ctrlr_loss_timeout_sec": 0, 00:25:15.481 "reconnect_delay_sec": 0, 00:25:15.481 "fast_io_fail_timeout_sec": 0, 00:25:15.481 "disable_auto_failback": false, 00:25:15.481 "generate_uuids": false, 00:25:15.481 "transport_tos": 0, 00:25:15.481 "nvme_error_stat": false, 00:25:15.481 "rdma_srq_size": 0, 00:25:15.481 "io_path_stat": false, 00:25:15.481 "allow_accel_sequence": false, 00:25:15.481 "rdma_max_cq_size": 0, 00:25:15.481 "rdma_cm_event_timeout_ms": 0, 00:25:15.481 "dhchap_digests": [ 00:25:15.481 "sha256", 00:25:15.481 "sha384", 00:25:15.481 "sha512" 00:25:15.481 ], 00:25:15.481 "dhchap_dhgroups": [ 00:25:15.481 "null", 00:25:15.481 "ffdhe2048", 00:25:15.481 "ffdhe3072", 00:25:15.481 "ffdhe4096", 00:25:15.481 "ffdhe6144", 00:25:15.481 "ffdhe8192" 00:25:15.481 ] 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_nvme_set_hotplug", 00:25:15.481 "params": { 00:25:15.481 "period_us": 100000, 00:25:15.481 "enable": false 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_malloc_create", 00:25:15.481 "params": { 00:25:15.481 "name": "malloc0", 00:25:15.481 "num_blocks": 8192, 00:25:15.481 "block_size": 4096, 00:25:15.481 "physical_block_size": 4096, 00:25:15.481 "uuid": "b07fb501-47cd-4c93-a28a-92c46904b146", 
00:25:15.481 "optimal_io_boundary": 0, 00:25:15.481 "md_size": 0, 00:25:15.481 "dif_type": 0, 00:25:15.481 "dif_is_head_of_md": false, 00:25:15.481 "dif_pi_format": 0 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "bdev_wait_for_examine" 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "nbd", 00:25:15.481 "config": [] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "scheduler", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "framework_set_scheduler", 00:25:15.481 "params": { 00:25:15.481 "name": "static" 00:25:15.481 } 00:25:15.481 } 00:25:15.481 ] 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "subsystem": "nvmf", 00:25:15.481 "config": [ 00:25:15.481 { 00:25:15.481 "method": "nvmf_set_config", 00:25:15.481 "params": { 00:25:15.481 "discovery_filter": "match_any", 00:25:15.481 "admin_cmd_passthru": { 00:25:15.481 "identify_ctrlr": false 00:25:15.481 }, 00:25:15.481 "dhchap_digests": [ 00:25:15.481 "sha256", 00:25:15.481 "sha384", 00:25:15.481 "sha512" 00:25:15.481 ], 00:25:15.481 "dhchap_dhgroups": [ 00:25:15.481 "null", 00:25:15.481 "ffdhe2048", 00:25:15.481 "ffdhe3072", 00:25:15.481 "ffdhe4096", 00:25:15.481 "ffdhe6144", 00:25:15.481 "ffdhe8192" 00:25:15.481 ] 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "nvmf_set_max_subsystems", 00:25:15.481 "params": { 00:25:15.481 "max_subsystems": 1024 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "nvmf_set_crdt", 00:25:15.481 "params": { 00:25:15.481 "crdt1": 0, 00:25:15.481 "crdt2": 0, 00:25:15.481 "crdt3": 0 00:25:15.481 } 00:25:15.481 }, 00:25:15.481 { 00:25:15.481 "method": "nvmf_create_transport", 00:25:15.481 "params": { 00:25:15.481 "trtype": "TCP", 00:25:15.481 "max_queue_depth": 128, 00:25:15.481 "max_io_qpairs_per_ctrlr": 127, 00:25:15.481 "in_capsule_data_size": 4096, 00:25:15.481 "max_io_size": 131072, 00:25:15.481 "io_unit_size": 131072, 00:25:15.481 "max_aq_depth": 128, 00:25:15.481 "num_shared_buffers": 511, 00:25:15.481 "buf_cache_size": 4294967295, 00:25:15.481 "dif_insert_or_strip": false, 00:25:15.481 "zcopy": false, 00:25:15.481 "c2h_success": false, 00:25:15.481 "sock_priority": 0, 00:25:15.481 "abort_timeout_sec": 1, 00:25:15.481 "ack_timeout": 0, 00:25:15.482 "data_wr_pool_size": 0 00:25:15.482 } 00:25:15.482 }, 00:25:15.482 { 00:25:15.482 "method": "nvmf_create_subsystem", 00:25:15.482 "params": { 00:25:15.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.482 "allow_any_host": false, 00:25:15.482 "serial_number": "00000000000000000000", 00:25:15.482 "model_number": "SPDK bdev Controller", 00:25:15.482 "max_namespaces": 32, 00:25:15.482 "min_cntlid": 1, 00:25:15.482 "max_cntlid": 65519, 00:25:15.482 "ana_reporting": false 00:25:15.482 } 00:25:15.482 }, 00:25:15.482 { 00:25:15.482 "method": "nvmf_subsystem_add_host", 00:25:15.482 "params": { 00:25:15.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.482 "host": "nqn.2016-06.io.spdk:host1", 00:25:15.482 "psk": "key0" 00:25:15.482 } 00:25:15.482 }, 00:25:15.482 { 00:25:15.482 "method": "nvmf_subsystem_add_ns", 00:25:15.482 "params": { 00:25:15.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.482 "namespace": { 00:25:15.482 "nsid": 1, 00:25:15.482 "bdev_name": "malloc0", 00:25:15.482 "nguid": "B07FB50147CD4C93A28A92C46904B146", 00:25:15.482 "uuid": "b07fb501-47cd-4c93-a28a-92c46904b146", 00:25:15.482 "no_auto_visible": false 00:25:15.482 } 00:25:15.482 } 00:25:15.482 }, 00:25:15.482 { 00:25:15.482 "method": "nvmf_subsystem_add_listener", 00:25:15.482 
"params": { 00:25:15.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.482 "listen_address": { 00:25:15.482 "trtype": "TCP", 00:25:15.482 "adrfam": "IPv4", 00:25:15.482 "traddr": "10.0.0.2", 00:25:15.482 "trsvcid": "4420" 00:25:15.482 }, 00:25:15.482 "secure_channel": false, 00:25:15.482 "sock_impl": "ssl" 00:25:15.482 } 00:25:15.482 } 00:25:15.482 ] 00:25:15.482 } 00:25:15.482 ] 00:25:15.482 }' 00:25:15.482 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:15.743 "subsystems": [ 00:25:15.743 { 00:25:15.743 "subsystem": "keyring", 00:25:15.743 "config": [ 00:25:15.743 { 00:25:15.743 "method": "keyring_file_add_key", 00:25:15.743 "params": { 00:25:15.743 "name": "key0", 00:25:15.743 "path": "/tmp/tmp.UqGGInoeRK" 00:25:15.743 } 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "iobuf", 00:25:15.743 "config": [ 00:25:15.743 { 00:25:15.743 "method": "iobuf_set_options", 00:25:15.743 "params": { 00:25:15.743 "small_pool_count": 8192, 00:25:15.743 "large_pool_count": 1024, 00:25:15.743 "small_bufsize": 8192, 00:25:15.743 "large_bufsize": 135168 00:25:15.743 } 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "sock", 00:25:15.743 "config": [ 00:25:15.743 { 00:25:15.743 "method": "sock_set_default_impl", 00:25:15.743 "params": { 00:25:15.743 "impl_name": "posix" 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "sock_impl_set_options", 00:25:15.743 "params": { 00:25:15.743 "impl_name": "ssl", 00:25:15.743 "recv_buf_size": 4096, 00:25:15.743 "send_buf_size": 4096, 00:25:15.743 "enable_recv_pipe": true, 00:25:15.743 "enable_quickack": false, 00:25:15.743 "enable_placement_id": 0, 00:25:15.743 "enable_zerocopy_send_server": true, 00:25:15.743 "enable_zerocopy_send_client": false, 00:25:15.743 "zerocopy_threshold": 0, 00:25:15.743 "tls_version": 0, 00:25:15.743 "enable_ktls": false 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "sock_impl_set_options", 00:25:15.743 "params": { 00:25:15.743 "impl_name": "posix", 00:25:15.743 "recv_buf_size": 2097152, 00:25:15.743 "send_buf_size": 2097152, 00:25:15.743 "enable_recv_pipe": true, 00:25:15.743 "enable_quickack": false, 00:25:15.743 "enable_placement_id": 0, 00:25:15.743 "enable_zerocopy_send_server": true, 00:25:15.743 "enable_zerocopy_send_client": false, 00:25:15.743 "zerocopy_threshold": 0, 00:25:15.743 "tls_version": 0, 00:25:15.743 "enable_ktls": false 00:25:15.743 } 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "vmd", 00:25:15.743 "config": [] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "accel", 00:25:15.743 "config": [ 00:25:15.743 { 00:25:15.743 "method": "accel_set_options", 00:25:15.743 "params": { 00:25:15.743 "small_cache_size": 128, 00:25:15.743 "large_cache_size": 16, 00:25:15.743 "task_count": 2048, 00:25:15.743 "sequence_count": 2048, 00:25:15.743 "buf_count": 2048 00:25:15.743 } 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "bdev", 00:25:15.743 "config": [ 00:25:15.743 { 00:25:15.743 "method": "bdev_set_options", 00:25:15.743 "params": { 00:25:15.743 "bdev_io_pool_size": 65535, 00:25:15.743 "bdev_io_cache_size": 256, 00:25:15.743 "bdev_auto_examine": true, 00:25:15.743 "iobuf_small_cache_size": 128, 00:25:15.743 
"iobuf_large_cache_size": 16 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_raid_set_options", 00:25:15.743 "params": { 00:25:15.743 "process_window_size_kb": 1024, 00:25:15.743 "process_max_bandwidth_mb_sec": 0 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_iscsi_set_options", 00:25:15.743 "params": { 00:25:15.743 "timeout_sec": 30 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_nvme_set_options", 00:25:15.743 "params": { 00:25:15.743 "action_on_timeout": "none", 00:25:15.743 "timeout_us": 0, 00:25:15.743 "timeout_admin_us": 0, 00:25:15.743 "keep_alive_timeout_ms": 10000, 00:25:15.743 "arbitration_burst": 0, 00:25:15.743 "low_priority_weight": 0, 00:25:15.743 "medium_priority_weight": 0, 00:25:15.743 "high_priority_weight": 0, 00:25:15.743 "nvme_adminq_poll_period_us": 10000, 00:25:15.743 "nvme_ioq_poll_period_us": 0, 00:25:15.743 "io_queue_requests": 512, 00:25:15.743 "delay_cmd_submit": true, 00:25:15.743 "transport_retry_count": 4, 00:25:15.743 "bdev_retry_count": 3, 00:25:15.743 "transport_ack_timeout": 0, 00:25:15.743 "ctrlr_loss_timeout_sec": 0, 00:25:15.743 "reconnect_delay_sec": 0, 00:25:15.743 "fast_io_fail_timeout_sec": 0, 00:25:15.743 "disable_auto_failback": false, 00:25:15.743 "generate_uuids": false, 00:25:15.743 "transport_tos": 0, 00:25:15.743 "nvme_error_stat": false, 00:25:15.743 "rdma_srq_size": 0, 00:25:15.743 "io_path_stat": false, 00:25:15.743 "allow_accel_sequence": false, 00:25:15.743 "rdma_max_cq_size": 0, 00:25:15.743 "rdma_cm_event_timeout_ms": 0, 00:25:15.743 "dhchap_digests": [ 00:25:15.743 "sha256", 00:25:15.743 "sha384", 00:25:15.743 "sha512" 00:25:15.743 ], 00:25:15.743 "dhchap_dhgroups": [ 00:25:15.743 "null", 00:25:15.743 "ffdhe2048", 00:25:15.743 "ffdhe3072", 00:25:15.743 "ffdhe4096", 00:25:15.743 "ffdhe6144", 00:25:15.743 "ffdhe8192" 00:25:15.743 ] 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_nvme_attach_controller", 00:25:15.743 "params": { 00:25:15.743 "name": "nvme0", 00:25:15.743 "trtype": "TCP", 00:25:15.743 "adrfam": "IPv4", 00:25:15.743 "traddr": "10.0.0.2", 00:25:15.743 "trsvcid": "4420", 00:25:15.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.743 "prchk_reftag": false, 00:25:15.743 "prchk_guard": false, 00:25:15.743 "ctrlr_loss_timeout_sec": 0, 00:25:15.743 "reconnect_delay_sec": 0, 00:25:15.743 "fast_io_fail_timeout_sec": 0, 00:25:15.743 "psk": "key0", 00:25:15.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.743 "hdgst": false, 00:25:15.743 "ddgst": false 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_nvme_set_hotplug", 00:25:15.743 "params": { 00:25:15.743 "period_us": 100000, 00:25:15.743 "enable": false 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_enable_histogram", 00:25:15.743 "params": { 00:25:15.743 "name": "nvme0n1", 00:25:15.743 "enable": true 00:25:15.743 } 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "method": "bdev_wait_for_examine" 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }, 00:25:15.743 { 00:25:15.743 "subsystem": "nbd", 00:25:15.743 "config": [] 00:25:15.743 } 00:25:15.743 ] 00:25:15.743 }' 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 423193 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 423193 ']' 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 423193 00:25:15.743 15:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.743 15:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423193 00:25:15.743 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.743 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.743 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423193' 00:25:15.743 killing process with pid 423193 00:25:15.743 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 423193 00:25:15.743 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.743 00:25:15.743 Latency(us) 00:25:15.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.744 =================================================================================================================== 00:25:15.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 423193 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 423016 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 423016 ']' 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 423016 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423016 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423016' 00:25:15.744 killing process with pid 423016 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 423016 00:25:15.744 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 423016 00:25:16.004 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:16.004 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:16.004 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:16.004 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:16.004 "subsystems": [ 00:25:16.004 { 00:25:16.004 "subsystem": "keyring", 00:25:16.004 "config": [ 00:25:16.004 { 00:25:16.004 "method": "keyring_file_add_key", 00:25:16.004 "params": { 00:25:16.004 "name": "key0", 00:25:16.004 "path": "/tmp/tmp.UqGGInoeRK" 00:25:16.004 } 00:25:16.004 } 00:25:16.004 ] 00:25:16.004 }, 00:25:16.004 { 00:25:16.004 "subsystem": "iobuf", 00:25:16.004 "config": [ 00:25:16.004 { 00:25:16.004 "method": "iobuf_set_options", 
00:25:16.004 "params": { 00:25:16.004 "small_pool_count": 8192, 00:25:16.004 "large_pool_count": 1024, 00:25:16.004 "small_bufsize": 8192, 00:25:16.004 "large_bufsize": 135168 00:25:16.004 } 00:25:16.004 } 00:25:16.004 ] 00:25:16.004 }, 00:25:16.004 { 00:25:16.004 "subsystem": "sock", 00:25:16.004 "config": [ 00:25:16.004 { 00:25:16.004 "method": "sock_set_default_impl", 00:25:16.004 "params": { 00:25:16.004 "impl_name": "posix" 00:25:16.004 } 00:25:16.004 }, 00:25:16.004 { 00:25:16.004 "method": "sock_impl_set_options", 00:25:16.004 "params": { 00:25:16.004 "impl_name": "ssl", 00:25:16.004 "recv_buf_size": 4096, 00:25:16.004 "send_buf_size": 4096, 00:25:16.005 "enable_recv_pipe": true, 00:25:16.005 "enable_quickack": false, 00:25:16.005 "enable_placement_id": 0, 00:25:16.005 "enable_zerocopy_send_server": true, 00:25:16.005 "enable_zerocopy_send_client": false, 00:25:16.005 "zerocopy_threshold": 0, 00:25:16.005 "tls_version": 0, 00:25:16.005 "enable_ktls": false 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "sock_impl_set_options", 00:25:16.005 "params": { 00:25:16.005 "impl_name": "posix", 00:25:16.005 "recv_buf_size": 2097152, 00:25:16.005 "send_buf_size": 2097152, 00:25:16.005 "enable_recv_pipe": true, 00:25:16.005 "enable_quickack": false, 00:25:16.005 "enable_placement_id": 0, 00:25:16.005 "enable_zerocopy_send_server": true, 00:25:16.005 "enable_zerocopy_send_client": false, 00:25:16.005 "zerocopy_threshold": 0, 00:25:16.005 "tls_version": 0, 00:25:16.005 "enable_ktls": false 00:25:16.005 } 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "vmd", 00:25:16.005 "config": [] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "accel", 00:25:16.005 "config": [ 00:25:16.005 { 00:25:16.005 "method": "accel_set_options", 00:25:16.005 "params": { 00:25:16.005 "small_cache_size": 128, 00:25:16.005 "large_cache_size": 16, 00:25:16.005 "task_count": 2048, 00:25:16.005 "sequence_count": 2048, 00:25:16.005 "buf_count": 2048 00:25:16.005 } 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "bdev", 00:25:16.005 "config": [ 00:25:16.005 { 00:25:16.005 "method": "bdev_set_options", 00:25:16.005 "params": { 00:25:16.005 "bdev_io_pool_size": 65535, 00:25:16.005 "bdev_io_cache_size": 256, 00:25:16.005 "bdev_auto_examine": true, 00:25:16.005 "iobuf_small_cache_size": 128, 00:25:16.005 "iobuf_large_cache_size": 16 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_raid_set_options", 00:25:16.005 "params": { 00:25:16.005 "process_window_size_kb": 1024, 00:25:16.005 "process_max_bandwidth_mb_sec": 0 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_iscsi_set_options", 00:25:16.005 "params": { 00:25:16.005 "timeout_sec": 30 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_nvme_set_options", 00:25:16.005 "params": { 00:25:16.005 "action_on_timeout": "none", 00:25:16.005 "timeout_us": 0, 00:25:16.005 "timeout_admin_us": 0, 00:25:16.005 "keep_alive_timeout_ms": 10000, 00:25:16.005 "arbitration_burst": 0, 00:25:16.005 "low_priority_weight": 0, 00:25:16.005 "medium_priority_weight": 0, 00:25:16.005 "high_priority_weight": 0, 00:25:16.005 "nvme_adminq_poll_period_us": 10000, 00:25:16.005 "nvme_ioq_poll_period_us": 0, 00:25:16.005 "io_queue_requests": 0, 00:25:16.005 "delay_cmd_submit": true, 00:25:16.005 "transport_retry_count": 4, 00:25:16.005 "bdev_retry_count": 3, 00:25:16.005 "transport_ack_timeout": 0, 00:25:16.005 
"ctrlr_loss_timeout_sec": 0, 00:25:16.005 "reconnect_delay_sec": 0, 00:25:16.005 "fast_io_fail_timeout_sec": 0, 00:25:16.005 "disable_auto_failback": false, 00:25:16.005 "generate_uuids": false, 00:25:16.005 "transport_tos": 0, 00:25:16.005 "nvme_error_stat": false, 00:25:16.005 "rdma_srq_size": 0, 00:25:16.005 "io_path_stat": false, 00:25:16.005 "allow_accel_sequence": false, 00:25:16.005 "rdma_max_cq_size": 0, 00:25:16.005 "rdma_cm_event_timeout_ms": 0, 00:25:16.005 "dhchap_digests": [ 00:25:16.005 "sha256", 00:25:16.005 "sha384", 00:25:16.005 "sha512" 00:25:16.005 ], 00:25:16.005 "dhchap_dhgroups": [ 00:25:16.005 "null", 00:25:16.005 "ffdhe2048", 00:25:16.005 "ffdhe3072", 00:25:16.005 "ffdhe4096", 00:25:16.005 "ffdhe6144", 00:25:16.005 "ffdhe8192" 00:25:16.005 ] 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_nvme_set_hotplug", 00:25:16.005 "params": { 00:25:16.005 "period_us": 100000, 00:25:16.005 "enable": false 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_malloc_create", 00:25:16.005 "params": { 00:25:16.005 "name": "malloc0", 00:25:16.005 "num_blocks": 8192, 00:25:16.005 "block_size": 4096, 00:25:16.005 "physical_block_size": 4096, 00:25:16.005 "uuid": "b07fb501-47cd-4c93-a28a-92c46904b146", 00:25:16.005 "optimal_io_boundary": 0, 00:25:16.005 "md_size": 0, 00:25:16.005 "dif_type": 0, 00:25:16.005 "dif_is_head_of_md": false, 00:25:16.005 "dif_pi_format": 0 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "bdev_wait_for_examine" 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "nbd", 00:25:16.005 "config": [] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "scheduler", 00:25:16.005 "config": [ 00:25:16.005 { 00:25:16.005 "method": "framework_set_scheduler", 00:25:16.005 "params": { 00:25:16.005 "name": "static" 00:25:16.005 } 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "subsystem": "nvmf", 00:25:16.005 "config": [ 00:25:16.005 { 00:25:16.005 "method": "nvmf_set_config", 00:25:16.005 "params": { 00:25:16.005 "discovery_filter": "match_any", 00:25:16.005 "admin_cmd_passthru": { 00:25:16.005 "identify_ctrlr": false 00:25:16.005 }, 00:25:16.005 "dhchap_digests": [ 00:25:16.005 "sha256", 00:25:16.005 "sha384", 00:25:16.005 "sha512" 00:25:16.005 ], 00:25:16.005 "dhchap_dhgroups": [ 00:25:16.005 "null", 00:25:16.005 "ffdhe2048", 00:25:16.005 "ffdhe3072", 00:25:16.005 "ffdhe4096", 00:25:16.005 "ffdhe6144", 00:25:16.005 "ffdhe8192" 00:25:16.005 ] 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_set_max_subsystems", 00:25:16.005 "params": { 00:25:16.005 "max_subsystems": 1024 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_set_crdt", 00:25:16.005 "params": { 00:25:16.005 "crdt1": 0, 00:25:16.005 "crdt2": 0, 00:25:16.005 "crdt3": 0 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_create_transport", 00:25:16.005 "params": { 00:25:16.005 "trtype": "TCP", 00:25:16.005 "max_queue_depth": 128, 00:25:16.005 "max_io_qpairs_per_ctrlr": 127, 00:25:16.005 "in_capsule_data_size": 4096, 00:25:16.005 "max_io_size": 131072, 00:25:16.005 "io_unit_size": 131072, 00:25:16.005 "max_aq_depth": 128, 00:25:16.005 "num_shared_buffers": 511, 00:25:16.005 "buf_cache_size": 4294967295, 00:25:16.005 "dif_insert_or_strip": false, 00:25:16.005 "zcopy": false, 00:25:16.005 "c2h_success": false, 00:25:16.005 "sock_priority": 0, 00:25:16.005 "abort_timeout_sec": 1, 00:25:16.005 "ack_timeout": 0, 
00:25:16.005 "data_wr_pool_size": 0 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_create_subsystem", 00:25:16.005 "params": { 00:25:16.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.005 "allow_any_host": false, 00:25:16.005 "serial_number": "00000000000000000000", 00:25:16.005 "model_number": "SPDK bdev Controller", 00:25:16.005 "max_namespaces": 32, 00:25:16.005 "min_cntlid": 1, 00:25:16.005 "max_cntlid": 65519, 00:25:16.005 "ana_reporting": false 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_subsystem_add_host", 00:25:16.005 "params": { 00:25:16.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.005 "host": "nqn.2016-06.io.spdk:host1", 00:25:16.005 "psk": "key0" 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_subsystem_add_ns", 00:25:16.005 "params": { 00:25:16.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.005 "namespace": { 00:25:16.005 "nsid": 1, 00:25:16.005 "bdev_name": "malloc0", 00:25:16.005 "nguid": "B07FB50147CD4C93A28A92C46904B146", 00:25:16.005 "uuid": "b07fb501-47cd-4c93-a28a-92c46904b146", 00:25:16.005 "no_auto_visible": false 00:25:16.005 } 00:25:16.005 } 00:25:16.005 }, 00:25:16.005 { 00:25:16.005 "method": "nvmf_subsystem_add_listener", 00:25:16.005 "params": { 00:25:16.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.005 "listen_address": { 00:25:16.005 "trtype": "TCP", 00:25:16.005 "adrfam": "IPv4", 00:25:16.005 "traddr": "10.0.0.2", 00:25:16.005 "trsvcid": "4420" 00:25:16.005 }, 00:25:16.005 "secure_channel": false, 00:25:16.005 "sock_impl": "ssl" 00:25:16.005 } 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 } 00:25:16.005 ] 00:25:16.005 }' 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=423746 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 423746 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 423746 ']' 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.005 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.006 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.006 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.006 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.006 [2024-09-27 15:43:56.407042] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:25:16.006 [2024-09-27 15:43:56.407096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.006 [2024-09-27 15:43:56.490774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.266 [2024-09-27 15:43:56.518575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.266 [2024-09-27 15:43:56.518609] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.266 [2024-09-27 15:43:56.518614] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.266 [2024-09-27 15:43:56.518619] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.267 [2024-09-27 15:43:56.518623] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.267 [2024-09-27 15:43:56.518663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.267 [2024-09-27 15:43:56.719049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.267 [2024-09-27 15:43:56.751006] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:16.267 [2024-09-27 15:43:56.751196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=424078 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 424078 /var/tmp/bdevperf.sock 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 424078 ']' 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
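Both daemons here are relaunched from captured configuration rather than built up RPC by RPC: the JSON blobs saved earlier with save_config are echoed back in as /dev/fd/62 (nvmf_tgt) and /dev/fd/63 (bdevperf), so the restarted processes come up with the keyring, sock, bdev and nvmf subsystems already in place. A rough stand-alone equivalent, assuming a running target on the default RPC socket whose state you want to replay:

  # snapshot the live configuration of the running target
  rpc.py save_config > tgt.json
  # relaunch the target from that snapshot; process substitution reproduces
  # the /dev/fd redirection tls.sh uses with its in-memory config string
  nvmf_tgt -i 0 -e 0xFFFF -c <(cat tgt.json)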
00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.839 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:16.839 "subsystems": [ 00:25:16.839 { 00:25:16.839 "subsystem": "keyring", 00:25:16.839 "config": [ 00:25:16.839 { 00:25:16.839 "method": "keyring_file_add_key", 00:25:16.839 "params": { 00:25:16.839 "name": "key0", 00:25:16.839 "path": "/tmp/tmp.UqGGInoeRK" 00:25:16.839 } 00:25:16.839 } 00:25:16.839 ] 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "subsystem": "iobuf", 00:25:16.839 "config": [ 00:25:16.839 { 00:25:16.839 "method": "iobuf_set_options", 00:25:16.839 "params": { 00:25:16.839 "small_pool_count": 8192, 00:25:16.839 "large_pool_count": 1024, 00:25:16.839 "small_bufsize": 8192, 00:25:16.839 "large_bufsize": 135168 00:25:16.839 } 00:25:16.839 } 00:25:16.839 ] 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "subsystem": "sock", 00:25:16.839 "config": [ 00:25:16.839 { 00:25:16.839 "method": "sock_set_default_impl", 00:25:16.839 "params": { 00:25:16.839 "impl_name": "posix" 00:25:16.839 } 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "method": "sock_impl_set_options", 00:25:16.839 "params": { 00:25:16.839 "impl_name": "ssl", 00:25:16.839 "recv_buf_size": 4096, 00:25:16.839 "send_buf_size": 4096, 00:25:16.839 "enable_recv_pipe": true, 00:25:16.839 "enable_quickack": false, 00:25:16.839 "enable_placement_id": 0, 00:25:16.839 "enable_zerocopy_send_server": true, 00:25:16.839 "enable_zerocopy_send_client": false, 00:25:16.839 "zerocopy_threshold": 0, 00:25:16.839 "tls_version": 0, 00:25:16.839 "enable_ktls": false 00:25:16.839 } 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "method": "sock_impl_set_options", 00:25:16.839 "params": { 00:25:16.839 "impl_name": "posix", 00:25:16.839 "recv_buf_size": 2097152, 00:25:16.839 "send_buf_size": 2097152, 00:25:16.839 "enable_recv_pipe": true, 00:25:16.839 "enable_quickack": false, 00:25:16.839 "enable_placement_id": 0, 00:25:16.839 "enable_zerocopy_send_server": true, 00:25:16.839 "enable_zerocopy_send_client": false, 00:25:16.839 "zerocopy_threshold": 0, 00:25:16.839 "tls_version": 0, 00:25:16.839 "enable_ktls": false 00:25:16.839 } 00:25:16.839 } 00:25:16.839 ] 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "subsystem": "vmd", 00:25:16.839 "config": [] 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "subsystem": "accel", 00:25:16.839 "config": [ 00:25:16.839 { 00:25:16.839 "method": "accel_set_options", 00:25:16.839 "params": { 00:25:16.839 "small_cache_size": 128, 00:25:16.839 "large_cache_size": 16, 00:25:16.839 "task_count": 2048, 00:25:16.839 "sequence_count": 2048, 00:25:16.839 "buf_count": 2048 00:25:16.839 } 00:25:16.839 } 00:25:16.839 ] 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "subsystem": "bdev", 00:25:16.839 "config": [ 00:25:16.839 { 00:25:16.839 "method": "bdev_set_options", 00:25:16.839 "params": { 00:25:16.839 "bdev_io_pool_size": 65535, 00:25:16.839 "bdev_io_cache_size": 256, 00:25:16.839 "bdev_auto_examine": true, 00:25:16.839 "iobuf_small_cache_size": 128, 00:25:16.839 "iobuf_large_cache_size": 16 00:25:16.839 } 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "method": "bdev_raid_set_options", 00:25:16.839 
"params": { 00:25:16.839 "process_window_size_kb": 1024, 00:25:16.839 "process_max_bandwidth_mb_sec": 0 00:25:16.839 } 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "method": "bdev_iscsi_set_options", 00:25:16.839 "params": { 00:25:16.839 "timeout_sec": 30 00:25:16.839 } 00:25:16.839 }, 00:25:16.839 { 00:25:16.839 "method": "bdev_nvme_set_options", 00:25:16.839 "params": { 00:25:16.839 "action_on_timeout": "none", 00:25:16.839 "timeout_us": 0, 00:25:16.839 "timeout_admin_us": 0, 00:25:16.839 "keep_alive_timeout_ms": 10000, 00:25:16.839 "arbitration_burst": 0, 00:25:16.840 "low_priority_weight": 0, 00:25:16.840 "medium_priority_weight": 0, 00:25:16.840 "high_priority_weight": 0, 00:25:16.840 "nvme_adminq_poll_period_us": 10000, 00:25:16.840 "nvme_ioq_poll_period_us": 0, 00:25:16.840 "io_queue_requests": 512, 00:25:16.840 "delay_cmd_submit": true, 00:25:16.840 "transport_retry_count": 4, 00:25:16.840 "bdev_retry_count": 3, 00:25:16.840 "transport_ack_timeout": 0, 00:25:16.840 "ctrlr_loss_timeout_sec": 0, 00:25:16.840 "reconnect_delay_sec": 0, 00:25:16.840 "fast_io_fail_timeout_sec": 0, 00:25:16.840 "disable_auto_failback": false, 00:25:16.840 "generate_uuids": false, 00:25:16.840 "transport_tos": 0, 00:25:16.840 "nvme_error_stat": false, 00:25:16.840 "rdma_srq_size": 0, 00:25:16.840 "io_path_stat": false, 00:25:16.840 "allow_accel_sequence": false, 00:25:16.840 "rdma_max_cq_size": 0, 00:25:16.840 "rdma_cm_event_timeout_ms": 0, 00:25:16.840 "dhchap_digests": [ 00:25:16.840 "sha256", 00:25:16.840 "sha384", 00:25:16.840 "sha512" 00:25:16.840 ], 00:25:16.840 "dhchap_dhgroups": [ 00:25:16.840 "null", 00:25:16.840 "ffdhe2048", 00:25:16.840 "ffdhe3072", 00:25:16.840 "ffdhe4096", 00:25:16.840 "ffdhe6144", 00:25:16.840 "ffdhe8192" 00:25:16.840 ] 00:25:16.840 } 00:25:16.840 }, 00:25:16.840 { 00:25:16.840 "method": "bdev_nvme_attach_controller", 00:25:16.840 "params": { 00:25:16.840 "name": "nvme0", 00:25:16.840 "trtype": "TCP", 00:25:16.840 "adrfam": "IPv4", 00:25:16.840 "traddr": "10.0.0.2", 00:25:16.840 "trsvcid": "4420", 00:25:16.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.840 "prchk_reftag": false, 00:25:16.840 "prchk_guard": false, 00:25:16.840 "ctrlr_loss_timeout_sec": 0, 00:25:16.840 "reconnect_delay_sec": 0, 00:25:16.840 "fast_io_fail_timeout_sec": 0, 00:25:16.840 "psk": "key0", 00:25:16.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:16.840 "hdgst": false, 00:25:16.840 "ddgst": false 00:25:16.840 } 00:25:16.840 }, 00:25:16.840 { 00:25:16.840 "method": "bdev_nvme_set_hotplug", 00:25:16.840 "params": { 00:25:16.840 "period_us": 100000, 00:25:16.840 "enable": false 00:25:16.840 } 00:25:16.840 }, 00:25:16.840 { 00:25:16.840 "method": "bdev_enable_histogram", 00:25:16.840 "params": { 00:25:16.840 "name": "nvme0n1", 00:25:16.840 "enable": true 00:25:16.840 } 00:25:16.840 }, 00:25:16.840 { 00:25:16.840 "method": "bdev_wait_for_examine" 00:25:16.840 } 00:25:16.840 ] 00:25:16.840 }, 00:25:16.840 { 00:25:16.840 "subsystem": "nbd", 00:25:16.840 "config": [] 00:25:16.840 } 00:25:16.840 ] 00:25:16.840 }' 00:25:16.840 [2024-09-27 15:43:57.287843] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:25:16.840 [2024-09-27 15:43:57.287903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424078 ] 00:25:17.101 [2024-09-27 15:43:57.365909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.101 [2024-09-27 15:43:57.394593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.101 [2024-09-27 15:43:57.523962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.673 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.673 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:17.673 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.673 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:17.934 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.934 15:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.934 Running I/O for 1 seconds... 00:25:19.320 2612.00 IOPS, 10.20 MiB/s 00:25:19.320 Latency(us) 00:25:19.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.320 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:19.320 Verification LBA range: start 0x0 length 0x2000 00:25:19.320 nvme0n1 : 1.01 2700.84 10.55 0.00 0.00 47067.72 4478.29 191365.12 00:25:19.320 =================================================================================================================== 00:25:19.320 Total : 2700.84 10.55 0.00 0.00 47067.72 4478.29 191365.12 00:25:19.320 { 00:25:19.320 "results": [ 00:25:19.320 { 00:25:19.320 "job": "nvme0n1", 00:25:19.320 "core_mask": "0x2", 00:25:19.320 "workload": "verify", 00:25:19.320 "status": "finished", 00:25:19.320 "verify_range": { 00:25:19.320 "start": 0, 00:25:19.320 "length": 8192 00:25:19.320 }, 00:25:19.320 "queue_depth": 128, 00:25:19.320 "io_size": 4096, 00:25:19.320 "runtime": 1.014501, 00:25:19.320 "iops": 2700.8351889253927, 00:25:19.320 "mibps": 10.550137456739815, 00:25:19.320 "io_failed": 0, 00:25:19.320 "io_timeout": 0, 00:25:19.320 "avg_latency_us": 47067.719007299274, 00:25:19.320 "min_latency_us": 4478.293333333333, 00:25:19.320 "max_latency_us": 191365.12 00:25:19.320 } 00:25:19.320 ], 00:25:19.320 "core_count": 1 00:25:19.320 } 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:19.320 nvmf_trace.0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 424078 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 424078 ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 424078 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 424078 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 424078' 00:25:19.320 killing process with pid 424078 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 424078 00:25:19.320 Received shutdown signal, test time was about 1.000000 seconds 00:25:19.320 00:25:19.320 Latency(us) 00:25:19.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.320 =================================================================================================================== 00:25:19.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 424078 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.320 rmmod nvme_tcp 00:25:19.320 rmmod nvme_fabrics 00:25:19.320 rmmod nvme_keyring 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 
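The cleanup running here is the standard nvmftestfini sequence for TCP tests: sync, unload the host-side NVMe modules (the rmmod lines above), kill the remaining target process (pid 423746, below), strip the SPDK_NVMF rules back out of iptables, and flush the test interface. Roughly, and assuming the same module and interface names as this run:

  sync
  # removing nvme-tcp also pulls out its nvme_fabrics/nvme_keyring
  # dependencies, per the rmmod lines above
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop only the rules the test added, keeping everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1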
00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 423746 ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 423746 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 423746 ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 423746 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423746 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423746' 00:25:19.320 killing process with pid 423746 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 423746 00:25:19.320 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 423746 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.582 15:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.493 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.493 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NagWxdqRwU /tmp/tmp.79XxuXZBEk /tmp/tmp.UqGGInoeRK 00:25:21.493 00:25:21.493 real 1m27.154s 00:25:21.493 user 2m17.979s 00:25:21.493 sys 0m25.859s 00:25:21.493 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.493 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.493 ************************************ 00:25:21.493 END TEST nvmf_tls 00:25:21.493 ************************************ 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:21.753 ************************************ 00:25:21.753 START TEST nvmf_fips 00:25:21.753 ************************************ 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:21.753 * Looking for test storage... 00:25:21.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:21.753 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:22.014 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:22.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.015 --rc genhtml_branch_coverage=1 00:25:22.015 --rc genhtml_function_coverage=1 00:25:22.015 --rc genhtml_legend=1 00:25:22.015 --rc geninfo_all_blocks=1 00:25:22.015 --rc geninfo_unexecuted_blocks=1 00:25:22.015 00:25:22.015 ' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:22.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.015 --rc genhtml_branch_coverage=1 00:25:22.015 --rc genhtml_function_coverage=1 00:25:22.015 --rc genhtml_legend=1 00:25:22.015 --rc geninfo_all_blocks=1 00:25:22.015 --rc geninfo_unexecuted_blocks=1 00:25:22.015 00:25:22.015 ' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:22.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.015 --rc genhtml_branch_coverage=1 00:25:22.015 --rc genhtml_function_coverage=1 00:25:22.015 --rc genhtml_legend=1 00:25:22.015 --rc geninfo_all_blocks=1 00:25:22.015 --rc geninfo_unexecuted_blocks=1 00:25:22.015 00:25:22.015 ' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:22.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.015 --rc genhtml_branch_coverage=1 00:25:22.015 --rc genhtml_function_coverage=1 00:25:22.015 --rc genhtml_legend=1 00:25:22.015 --rc geninfo_all_blocks=1 00:25:22.015 --rc geninfo_unexecuted_blocks=1 00:25:22.015 00:25:22.015 ' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:22.015 15:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:22.015 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.016 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:22.277 Error setting digest 00:25:22.277 40A21858017F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:22.277 40A21858017F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:22.277 
15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.277 15:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:30.418 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.419 15:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:30.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:30.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.419 15:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:30.419 Found net devices under 0000:31:00.0: cvl_0_0 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:30.419 Found net devices under 0000:31:00.1: cvl_0_1 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.419 15:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:25:30.419 00:25:30.419 --- 10.0.0.2 ping statistics --- 00:25:30.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.419 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:25:30.419 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
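Note: the ACCEPT rule inserted above is deliberately tagged with an SPDK_NVMF comment so the teardown seen earlier in this log can strip it without tracking rule numbers. The pair, grouped here for clarity with the exact commands from the trace (the reply to the second ping continues below):

  # setup: insert the rule and tag it so it can be found again later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: rewrite the ruleset with every SPDK_NVMF-tagged rule removed
  iptables-save | grep -v SPDK_NVMF | iptables-restore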
00:25:30.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:25:30.419 00:25:30.419 --- 10.0.0.1 ping statistics --- 00:25:30.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.420 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=428854 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 428854 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 428854 ']' 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.420 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.420 [2024-09-27 15:44:10.303831] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:25:30.420 [2024-09-27 15:44:10.303913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.420 [2024-09-27 15:44:10.391867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.420 [2024-09-27 15:44:10.437687] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.420 [2024-09-27 15:44:10.437743] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.420 [2024-09-27 15:44:10.437751] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.420 [2024-09-27 15:44:10.437758] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.420 [2024-09-27 15:44:10.437764] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.420 [2024-09-27 15:44:10.437787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.681 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.W4c 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.W4c 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.W4c 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.W4c 00:25:30.682 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:30.944 [2024-09-27 15:44:11.320845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.944 [2024-09-27 15:44:11.336838] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.944 [2024-09-27 15:44:11.337182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.944 malloc0 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:30.944 15:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=429204 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 429204 /var/tmp/bdevperf.sock 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 429204 ']' 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.944 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:31.205 [2024-09-27 15:44:11.486215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:31.205 [2024-09-27 15:44:11.486297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429204 ] 00:25:31.205 [2024-09-27 15:44:11.555498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.205 [2024-09-27 15:44:11.616093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.466 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.466 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:31.466 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.W4c 00:25:31.466 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:31.727 [2024-09-27 15:44:12.077310] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.727 TLSTESTn1 00:25:31.727 15:44:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:31.989 Running I/O for 10 seconds... 
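Note: before the per-second throughput samples below, this is what the trace just wired up: the TLS PSK is written to a temp file, its permissions are restricted, the key is registered in bdevperf's keyring, and the NVMe-oF controller is attached referencing it. A condensed recap using the key, socket, and NQNs verbatim from the trace (rpc.py stands in for the full scripts/rpc.py path):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)          # e.g. /tmp/spdk-psk.W4c above
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                      # harness restricts permissions before registering the key
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0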
00:25:42.108 2324.00 IOPS, 9.08 MiB/s 2018.50 IOPS, 7.88 MiB/s 1863.33 IOPS, 7.28 MiB/s 2923.50 IOPS, 11.42 MiB/s 3028.20 IOPS, 11.83 MiB/s 2935.33 IOPS, 11.47 MiB/s 3114.71 IOPS, 12.17 MiB/s 3468.62 IOPS, 13.55 MiB/s 3563.56 IOPS, 13.92 MiB/s 3408.50 IOPS, 13.31 MiB/s 00:25:42.108 Latency(us) 00:25:42.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:42.108 Verification LBA range: start 0x0 length 0x2000 00:25:42.108 TLSTESTn1 : 10.10 3386.75 13.23 0.00 0.00 37624.02 7427.41 97430.19 00:25:42.108 =================================================================================================================== 00:25:42.108 Total : 3386.75 13.23 0.00 0.00 37624.02 7427.41 97430.19 00:25:42.108 { 00:25:42.108 "results": [ 00:25:42.108 { 00:25:42.108 "job": "TLSTESTn1", 00:25:42.108 "core_mask": "0x4", 00:25:42.108 "workload": "verify", 00:25:42.108 "status": "finished", 00:25:42.108 "verify_range": { 00:25:42.108 "start": 0, 00:25:42.108 "length": 8192 00:25:42.108 }, 00:25:42.108 "queue_depth": 128, 00:25:42.108 "io_size": 4096, 00:25:42.108 "runtime": 10.102023, 00:25:42.108 "iops": 3386.7473871322604, 00:25:42.108 "mibps": 13.229481980985392, 00:25:42.108 "io_failed": 0, 00:25:42.108 "io_timeout": 0, 00:25:42.108 "avg_latency_us": 37624.018927698045, 00:25:42.108 "min_latency_us": 7427.413333333333, 00:25:42.108 "max_latency_us": 97430.18666666666 00:25:42.108 } 00:25:42.108 ], 00:25:42.108 "core_count": 1 00:25:42.108 } 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.108 nvmf_trace.0 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 429204 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 429204 ']' 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 429204 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.108 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 429204 00:25:42.368 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:42.368 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:42.368 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 429204' 00:25:42.368 killing process with pid 429204 00:25:42.368 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 429204 00:25:42.368 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.368 00:25:42.368 Latency(us) 00:25:42.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.369 =================================================================================================================== 00:25:42.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 429204 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.369 rmmod nvme_tcp 00:25:42.369 rmmod nvme_fabrics 00:25:42.369 rmmod nvme_keyring 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 428854 ']' 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 428854 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 428854 ']' 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 428854 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 428854 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 428854' 00:25:42.369 killing process with pid 428854 00:25:42.369 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 428854 00:25:42.369 15:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 428854 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.629 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.174 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.174 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.W4c 00:25:45.174 00:25:45.174 real 0m22.995s 00:25:45.174 user 0m24.127s 00:25:45.174 sys 0m9.587s 00:25:45.174 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:45.174 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:45.174 ************************************ 00:25:45.174 END TEST nvmf_fips 00:25:45.175 ************************************ 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.175 ************************************ 00:25:45.175 START TEST nvmf_control_msg_list 00:25:45.175 ************************************ 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:45.175 * Looking for test storage... 
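Note: every suite in this log runs through the same run_test wrapper, which is why a real/user/sys timing triple and an END TEST banner close nvmf_fips above just as a START TEST banner opens nvmf_control_msg_list here. A rough sketch inferred from those banners alone; the function body is an assumption, not the harness source:

  run_test() {    # wrap a test script in banners and wall-clock timing
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }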
00:25:45.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.175 --rc genhtml_branch_coverage=1 00:25:45.175 --rc genhtml_function_coverage=1 00:25:45.175 --rc genhtml_legend=1 00:25:45.175 --rc geninfo_all_blocks=1 00:25:45.175 --rc geninfo_unexecuted_blocks=1 00:25:45.175 00:25:45.175 ' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.175 --rc genhtml_branch_coverage=1 00:25:45.175 --rc genhtml_function_coverage=1 00:25:45.175 --rc genhtml_legend=1 00:25:45.175 --rc geninfo_all_blocks=1 00:25:45.175 --rc geninfo_unexecuted_blocks=1 00:25:45.175 00:25:45.175 ' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.175 --rc genhtml_branch_coverage=1 00:25:45.175 --rc genhtml_function_coverage=1 00:25:45.175 --rc genhtml_legend=1 00:25:45.175 --rc geninfo_all_blocks=1 00:25:45.175 --rc geninfo_unexecuted_blocks=1 00:25:45.175 00:25:45.175 ' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:45.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.175 --rc genhtml_branch_coverage=1 00:25:45.175 --rc genhtml_function_coverage=1 00:25:45.175 --rc genhtml_legend=1 00:25:45.175 --rc geninfo_all_blocks=1 00:25:45.175 --rc geninfo_unexecuted_blocks=1 00:25:45.175 00:25:45.175 ' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.175 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.176 15:44:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:53.314 15:44:32 
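
The "[: : integer expression expected" message captured above is a real failure at nvmf/common.sh line 33: an unset or empty variable reaches test(1) as '[' '' -eq 1 ']', which cannot be parsed as an integer comparison, so the test errors out (and the script simply continues). A hedged reproduction and the usual guard; the variable name here is a stand-in, not the actual one from common.sh:

# Reproduces the logged failure: empty string where -eq needs an integer.
flag=""
[ "$flag" -eq 1 ] && echo yes      # -> "[: : integer expression expected"

# Common guard: default the expansion so test(1) always sees a number.
[ "${flag:-0}" -eq 1 ] && echo yes # quietly false when unset or empty
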
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.314 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:53.315 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.315 15:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:53.315 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:53.315 Found net devices under 0000:31:00.0: cvl_0_0 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:53.315 Found net devices under 
0000:31:00.1: cvl_0_1 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.315 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.316 15:44:32 
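
The nvmf_tcp_init sequence above builds the standard SPDK two-sided topology out of the E810 pair: port 0000:31:00.0 (cvl_0_0) is moved into a private namespace as the target side, port 0000:31:00.1 (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens TCP/4420. Condensed from the trace itself, with names and addresses copied from the log; the harness additionally tags its iptables rule with an SPDK_NVMF comment so teardown can find it later:

# Target interface lives in its own namespace; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic through to the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
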
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:25:53.316 00:25:53.316 --- 10.0.0.2 ping statistics --- 00:25:53.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.316 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:53.316 00:25:53.316 --- 10.0.0.1 ping statistics --- 00:25:53.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.316 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:53.316 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=435615 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 435615 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 435615 ']' 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.316 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.316 [2024-09-27 15:44:33.087073] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:53.316 [2024-09-27 15:44:33.087156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.316 [2024-09-27 15:44:33.176515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.316 [2024-09-27 15:44:33.221837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.316 [2024-09-27 15:44:33.221892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.316 [2024-09-27 15:44:33.221930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.316 [2024-09-27 15:44:33.221937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.316 [2024-09-27 15:44:33.221943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.316 [2024-09-27 15:44:33.221971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 [2024-09-27 15:44:33.945165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.578 15:44:33 
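
With nvmf_tgt started inside the target namespace, the rpc_cmd calls above configure the condition this test exercises: in-capsule data capped at 768 bytes and a control-message pool of exactly one buffer. rpc_cmd is the test framework's wrapper around scripts/rpc.py pointed at the app's RPC socket, so the same setup can be sketched as direct rpc.py calls; this assumes the default /var/tmp/spdk.sock socket and carries the '-t tcp -o' transport options over verbatim from the trace rather than deriving them:

# Target runs inside the namespace so it owns the 10.0.0.2 side.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# Transport with a deliberately tiny control-message pool (the case under test).
scripts/rpc.py nvmf_create_transport -t tcp -o \
    --in-capsule-data-size 768 --control-msg-num 1

# Subsystem backed by a 32 MiB / 512 B-block malloc bdev, listening on TCP/4420.
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Three spdk_nvme_perf instances (lcores 0x2, 0x4, 0x8) then connect at once, so they contend for that single control-message buffer; the latency tables that follow show the starved worker averaging ~40 ms per I/O against ~0.65 ms for the others.
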
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 Malloc0 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.578 15:44:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:53.578 [2024-09-27 15:44:34.009571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=435781 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.578 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=435782 00:25:53.579 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.579 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=435784 00:25:53.579 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 435781 00:25:53.579 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.839 [2024-09-27 15:44:34.100466] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:53.839 [2024-09-27 15:44:34.100709] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:53.839 [2024-09-27 15:44:34.101203] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:54.780 Initializing NVMe Controllers 00:25:54.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:54.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:54.780 Initialization complete. Launching workers. 00:25:54.780 ======================================================== 00:25:54.780 Latency(us) 00:25:54.780 Device Information : IOPS MiB/s Average min max 00:25:54.780 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.81 40733.95 40966.65 00:25:54.780 ======================================================== 00:25:54.780 Total : 25.00 0.10 40893.81 40733.95 40966.65 00:25:54.780 00:25:54.780 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 435782 00:25:55.041 Initializing NVMe Controllers 00:25:55.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:55.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:55.041 Initialization complete. Launching workers. 00:25:55.041 ======================================================== 00:25:55.041 Latency(us) 00:25:55.041 Device Information : IOPS MiB/s Average min max 00:25:55.041 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1532.00 5.98 652.82 160.47 849.97 00:25:55.041 ======================================================== 00:25:55.041 Total : 1532.00 5.98 652.82 160.47 849.97 00:25:55.041 00:25:55.041 Initializing NVMe Controllers 00:25:55.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:55.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:55.041 Initialization complete. Launching workers. 
00:25:55.041 ======================================================== 00:25:55.041 Latency(us) 00:25:55.041 Device Information : IOPS MiB/s Average min max 00:25:55.041 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1557.00 6.08 641.99 149.35 809.27 00:25:55.041 ======================================================== 00:25:55.041 Total : 1557.00 6.08 641.99 149.35 809.27 00:25:55.041 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 435784 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.041 rmmod nvme_tcp 00:25:55.041 rmmod nvme_fabrics 00:25:55.041 rmmod nvme_keyring 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:55.041 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 435615 ']' 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 435615 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 435615 ']' 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 435615 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 435615 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 435615' 00:25:55.042 killing process with pid 435615 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 435615 00:25:55.042 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 435615 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.303 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.863 00:25:57.863 real 0m12.616s 00:25:57.863 user 0m8.208s 00:25:57.863 sys 0m6.602s 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:57.863 ************************************ 00:25:57.863 END TEST nvmf_control_msg_list 00:25:57.863 ************************************ 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:57.863 ************************************ 00:25:57.863 START TEST nvmf_wait_for_buf 00:25:57.863 ************************************ 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:57.863 * Looking for test storage... 
00:25:57.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:57.863 15:44:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.863 --rc genhtml_branch_coverage=1 00:25:57.863 --rc genhtml_function_coverage=1 00:25:57.863 --rc genhtml_legend=1 00:25:57.863 --rc geninfo_all_blocks=1 00:25:57.863 --rc geninfo_unexecuted_blocks=1 00:25:57.863 00:25:57.863 ' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.863 --rc genhtml_branch_coverage=1 00:25:57.863 --rc genhtml_function_coverage=1 00:25:57.863 --rc genhtml_legend=1 00:25:57.863 --rc geninfo_all_blocks=1 00:25:57.863 --rc geninfo_unexecuted_blocks=1 00:25:57.863 00:25:57.863 ' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.863 --rc genhtml_branch_coverage=1 00:25:57.863 --rc genhtml_function_coverage=1 00:25:57.863 --rc genhtml_legend=1 00:25:57.863 --rc geninfo_all_blocks=1 00:25:57.863 --rc geninfo_unexecuted_blocks=1 00:25:57.863 00:25:57.863 ' 00:25:57.863 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.863 --rc genhtml_branch_coverage=1 00:25:57.863 --rc genhtml_function_coverage=1 00:25:57.863 --rc genhtml_legend=1 00:25:57.863 --rc geninfo_all_blocks=1 00:25:57.863 --rc geninfo_unexecuted_blocks=1 00:25:57.863 00:25:57.863 ' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.864 15:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.864 15:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.006 
15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:06.006 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:06.006 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:06.006 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:06.007 
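# A minimal sketch of the vendor/device match that gather_supported_nvmf_pci_devs
# just performed above: it found two Intel E810 functions (0x8086:0x159b) and maps
# each PCI function to its kernel netdev through sysfs. The PCI address below is
# the first one reported in this run; lspci -Dnmm prints quoted numeric fields
# (slot, class, vendor, device).
lspci -Dnmm | awk '$3 == "\"8086\"" && $4 == "\"159b\"" { print $1 }'  # E810 functions
ls /sys/bus/pci/devices/0000:31:00.0/net/                              # netdev(s) for that function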
15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:06.007 Found net devices under 0000:31:00.0: cvl_0_0 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:06.007 Found net devices under 0000:31:00.1: cvl_0_1 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.007 15:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:26:06.007 00:26:06.007 --- 10.0.0.2 ping statistics --- 00:26:06.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.007 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:26:06.007 00:26:06.007 --- 10.0.0.1 ping statistics --- 00:26:06.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.007 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=440366 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 440366 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 440366 ']' 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.007 15:44:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 [2024-09-27 15:44:45.831347] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
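# A minimal sketch of the nvmf_tcp_init sequence traced above: the target port is
# moved into its own network namespace so initiator and target exchange real TCP
# traffic over the back-to-back E810 link. Interface names (cvl_0_0, cvl_0_1) and
# addresses are the ones from this run.
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on 4420
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator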
00:26:06.007 [2024-09-27 15:44:45.831416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.007 [2024-09-27 15:44:45.919855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.007 [2024-09-27 15:44:45.965756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.007 [2024-09-27 15:44:45.965810] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.007 [2024-09-27 15:44:45.965819] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.007 [2024-09-27 15:44:45.965826] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.007 [2024-09-27 15:44:45.965832] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.007 [2024-09-27 15:44:45.965853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:06.269 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.269 15:44:46 
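# A minimal sketch of the pre-init RPCs issued above. Because nvmf_tgt was started
# with --wait-for-rpc, the test can shrink the iobuf small pool to 154 buffers
# before subsystem init, which is what later forces "wait for buffer" retries.
# Assumes rpc.py (path relative to the SPDK repo root) and the default RPC socket
# /var/tmp/spdk.sock.
./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0   # no accel buffer cache
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192 # deliberately tiny pool
./scripts/rpc.py framework_start_init                                          # complete deferred init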
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 Malloc0 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 [2024-09-27 15:44:46.821029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:06.530 [2024-09-27 15:44:46.857350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.530 15:44:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:06.530 [2024-09-27 15:44:46.942007] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:08.459 Initializing NVMe Controllers 00:26:08.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:08.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:08.459 Initialization complete. Launching workers. 00:26:08.459 ======================================================== 00:26:08.459 Latency(us) 00:26:08.459 Device Information : IOPS MiB/s Average min max 00:26:08.459 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 118.00 14.75 35272.12 8010.24 75821.48 00:26:08.459 ======================================================== 00:26:08.459 Total : 118.00 14.75 35272.12 8010.24 75821.48 00:26:08.459 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1862 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1862 -eq 0 ]] 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.459 rmmod nvme_tcp 00:26:08.459 rmmod nvme_fabrics 00:26:08.459 rmmod nvme_keyring 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 440366 ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 440366 ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440366' 00:26:08.459 killing process with pid 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 440366 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:26:08.459 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:08.460 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:08.460 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.460 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.460 15:44:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.007 00:26:11.007 real 0m13.105s 00:26:11.007 user 0m5.408s 00:26:11.007 sys 0m6.280s 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:11.007 ************************************ 00:26:11.007 END TEST nvmf_wait_for_buf 00:26:11.007 ************************************ 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.007 15:44:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.007 ************************************ 00:26:11.007 START TEST nvmf_fuzz 00:26:11.007 ************************************ 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:11.007 * Looking for test storage... 00:26:11.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.007 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:11.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.008 --rc genhtml_branch_coverage=1 00:26:11.008 --rc genhtml_function_coverage=1 00:26:11.008 --rc genhtml_legend=1 00:26:11.008 --rc geninfo_all_blocks=1 00:26:11.008 --rc geninfo_unexecuted_blocks=1 00:26:11.008 00:26:11.008 ' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:11.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.008 --rc genhtml_branch_coverage=1 00:26:11.008 --rc genhtml_function_coverage=1 00:26:11.008 --rc genhtml_legend=1 00:26:11.008 --rc geninfo_all_blocks=1 00:26:11.008 --rc geninfo_unexecuted_blocks=1 00:26:11.008 00:26:11.008 ' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:11.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.008 --rc genhtml_branch_coverage=1 00:26:11.008 --rc genhtml_function_coverage=1 00:26:11.008 --rc genhtml_legend=1 00:26:11.008 --rc geninfo_all_blocks=1 00:26:11.008 --rc geninfo_unexecuted_blocks=1 00:26:11.008 00:26:11.008 ' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:11.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.008 --rc genhtml_branch_coverage=1 00:26:11.008 --rc genhtml_function_coverage=1 00:26:11.008 --rc genhtml_legend=1 00:26:11.008 --rc geninfo_all_blocks=1 00:26:11.008 --rc geninfo_unexecuted_blocks=1 00:26:11.008 00:26:11.008 ' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.008 15:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:19.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:19.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.157 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:19.158 
15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:19.158 Found net devices under 0000:31:00.0: cvl_0_0 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:19.158 Found net devices under 0000:31:00.1: cvl_0_1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.158 15:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:19.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:26:19.158 00:26:19.158 --- 10.0.0.2 ping statistics --- 00:26:19.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.158 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:19.158 00:26:19.158 --- 10.0.0.1 ping statistics --- 00:26:19.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.158 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=445121 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 445121 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 445121 ']' 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
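# A minimal sketch of the launch-and-wait pattern traced above: fabrics_fuzz.sh
# pins nvmf_tgt to one core (-m 0x1) inside the target namespace, then polls the
# RPC socket until the app answers. The rpc_get_methods call and the 0.5 s poll
# interval are illustrative; the test's waitforlisten helper does the equivalent.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep polling until /var/tmp/spdk.sock accepts RPCs
done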
00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.158 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.419 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.419 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:26:19.419 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.419 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.420 Malloc0 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.420 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:19.681 15:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:51.788 Fuzzing completed. 
Shutting down the fuzz application 00:26:51.788 00:26:51.788 Dumping successful admin opcodes: 00:26:51.788 8, 9, 10, 24, 00:26:51.788 Dumping successful io opcodes: 00:26:51.788 0, 9, 00:26:51.788 NS: 0x200003aeff00 I/O qp, Total commands completed: 1136217, total successful commands: 6680, random_seed: 1006407296 00:26:51.788 NS: 0x200003aeff00 admin qp, Total commands completed: 140857, total successful commands: 1144, random_seed: 524513920 00:26:51.788 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:51.788 Fuzzing completed. Shutting down the fuzz application 00:26:51.788 00:26:51.788 Dumping successful admin opcodes: 00:26:51.788 24, 00:26:51.788 Dumping successful io opcodes: 00:26:51.788 00:26:51.788 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 238056065 00:26:51.788 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 238130773 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.788 rmmod nvme_tcp 00:26:51.788 rmmod nvme_fabrics 00:26:51.788 rmmod nvme_keyring 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 445121 ']' 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 445121 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 445121 ']' 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 445121 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:51.788 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445121 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445121' 00:26:51.788 killing process with pid 445121 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 445121 00:26:51.788 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 445121 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.788 15:45:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.698 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:53.698 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:53.698 00:26:53.698 real 0m43.162s 00:26:53.698 user 0m56.576s 00:26:53.698 sys 0m15.884s 00:26:53.698 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.698 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:53.698 ************************************ 00:26:53.698 END TEST nvmf_fuzz 00:26:53.698 ************************************ 00:26:53.958 15:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:53.958 15:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:53.958 15:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.958 15:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:53.958 
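
Teardown, as traced above, unwinds the setup in reverse; a sketch of the sequence (remove_spdk_ns is the suite helper that deletes the test namespace and its interfaces):

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess $nvmfpid                    # signal the target and wait for it
    modprobe -v -r nvme-tcp                 # drop initiator-side kernel modules
    modprobe -v -r nvme-fabrics
    # Strip only the SPDK-tagged iptables rules, leaving everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    remove_spdk_ns
    ip -4 addr flush cvl_0_1
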
************************************ 00:26:53.959 START TEST nvmf_multiconnection 00:26:53.959 ************************************ 00:26:53.959 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:53.959 * Looking for test storage... 00:26:53.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.959 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:53.959 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:53.959 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:54.220 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.221 --rc genhtml_branch_coverage=1 00:26:54.221 --rc genhtml_function_coverage=1 00:26:54.221 --rc genhtml_legend=1 00:26:54.221 --rc geninfo_all_blocks=1 00:26:54.221 --rc geninfo_unexecuted_blocks=1 00:26:54.221 00:26:54.221 ' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.221 --rc genhtml_branch_coverage=1 00:26:54.221 --rc genhtml_function_coverage=1 00:26:54.221 --rc genhtml_legend=1 00:26:54.221 --rc geninfo_all_blocks=1 00:26:54.221 --rc geninfo_unexecuted_blocks=1 00:26:54.221 00:26:54.221 ' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.221 --rc genhtml_branch_coverage=1 00:26:54.221 --rc genhtml_function_coverage=1 00:26:54.221 --rc genhtml_legend=1 00:26:54.221 --rc geninfo_all_blocks=1 00:26:54.221 --rc geninfo_unexecuted_blocks=1 00:26:54.221 00:26:54.221 ' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.221 --rc genhtml_branch_coverage=1 00:26:54.221 --rc genhtml_function_coverage=1 00:26:54.221 --rc genhtml_legend=1 00:26:54.221 --rc geninfo_all_blocks=1 00:26:54.221 --rc geninfo_unexecuted_blocks=1 00:26:54.221 00:26:54.221 ' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:54.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:54.221 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.367 15:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.367 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:02.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
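
What the block above is doing, in outline: nvmf/common.sh buckets every NVMe-capable NIC by PCI vendor:device ID (the 0x8086 - 0x159b matches here are the two Intel E810 ports handled by the ice driver), then resolves each PCI function to its kernel interface through sysfs. A sketch of the resolution step, reconstructed from the trace:

    # The netdev name is just the directory under the device's sysfs node,
    # e.g. /sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")    # strip the path, keep the ifname
    done
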
00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:02.368 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:02.368 Found net devices under 0000:31:00.0: cvl_0_0 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:02.368 Found net devices under 0000:31:00.1: cvl_0_1 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.368 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:27:02.368 00:27:02.368 --- 10.0.0.2 ping statistics --- 00:27:02.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.368 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:27:02.368 00:27:02.368 --- 10.0.0.1 ping statistics --- 00:27:02.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.368 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=456307 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 456307 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 456307 ']' 00:27:02.368 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.369 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.369 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.369 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.369 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.369 [2024-09-27 15:45:42.297297] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:27:02.369 [2024-09-27 15:45:42.297366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.369 [2024-09-27 15:45:42.386750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.369 [2024-09-27 15:45:42.436000] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.369 [2024-09-27 15:45:42.436055] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.369 [2024-09-27 15:45:42.436063] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.369 [2024-09-27 15:45:42.436071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.369 [2024-09-27 15:45:42.436077] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
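
Before the target app started, the trace assembled the same point-to-point topology the fuzz test used: one port of the E810 pair moves into a private network namespace for the target, the peer port stays in the root namespace as the initiator, and a single iptables rule admits NVMe/TCP. The traced commands, condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
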
00:27:02.369 [2024-09-27 15:45:42.436226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.369 [2024-09-27 15:45:42.436375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.369 [2024-09-27 15:45:42.436529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.369 [2024-09-27 15:45:42.436530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 [2024-09-27 15:45:43.177763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 Malloc1 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
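
The Malloc1 commands below are the first pass of the test's provisioning loop; with NVMF_SUBSYS=11 it repeats eleven times, giving every subsystem its own 64 MiB namespace behind the shared 10.0.0.2:4420 listener. The loop, as traced from multiconnection.sh:

    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
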
00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 [2024-09-27 15:45:43.251501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 Malloc2 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 Malloc3 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 Malloc4 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.941 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 Malloc5 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 Malloc6 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.203 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 Malloc7 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
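
Not part of this trace, but a quick way to confirm the loop's effect would be to list the subsystems back from the target; nvmf_get_subsystems is a standard SPDK RPC, and the grep is only a hypothetical spot-check:

    rpc_cmd nvmf_get_subsystems | grep -c cnode   # expect 11 once the loop finishes
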
00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 Malloc8 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.204 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 Malloc9 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:03.466 15:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 Malloc10 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 Malloc11 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.466 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:05.381 15:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:05.381 15:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:05.381 15:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:05.381 15:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:05.381 15:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.290 15:45:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:08.677 15:45:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:08.677 15:45:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:08.677 15:45:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.677 15:45:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:08.677 15:45:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.586 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:11.970 15:45:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:11.970 15:45:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:11.970 15:45:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:27:11.970 15:45:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:11.970 15:45:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.514 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:15.898 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:15.898 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:15.898 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:15.898 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:15.898 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.809 15:45:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:19.720 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:19.720 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:27:19.720 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:19.720 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:19.720 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:21.631 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:23.014 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:23.014 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:23.014 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:23.014 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:23.014 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.556 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:26.939 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:26.939 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:26.939 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:26.939 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:26.939 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.852 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:30.769 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:30.769 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:30.769 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:30.769 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:30.769 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.679 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:34.590 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:34.591 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:34.591 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:34.591 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:34.591 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.500 15:46:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:38.410 15:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:38.410 15:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:38.410 15:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.410 15:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:38.410 15:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:40.320 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.320 15:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:42.231 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:42.231 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:42.231 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:42.231 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:42.231 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:44.140 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:44.140 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:44.141 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:44.401 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:44.401 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.401 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:44.401 15:46:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:44.401 [global] 00:27:44.401 thread=1 00:27:44.401 invalidate=1 00:27:44.401 rw=read 00:27:44.401 time_based=1 00:27:44.401 runtime=10 00:27:44.401 ioengine=libaio 00:27:44.401 direct=1 00:27:44.401 bs=262144 00:27:44.401 iodepth=64 00:27:44.401 norandommap=1 00:27:44.401 numjobs=1 00:27:44.401 00:27:44.401 [job0] 00:27:44.401 filename=/dev/nvme0n1 00:27:44.401 [job1] 00:27:44.401 filename=/dev/nvme10n1 00:27:44.401 [job2] 00:27:44.401 filename=/dev/nvme1n1 00:27:44.401 [job3] 00:27:44.401 filename=/dev/nvme2n1 00:27:44.401 [job4] 00:27:44.401 filename=/dev/nvme3n1 00:27:44.401 [job5] 00:27:44.401 filename=/dev/nvme4n1 00:27:44.401 [job6] 00:27:44.401 filename=/dev/nvme5n1 00:27:44.401 [job7] 00:27:44.401 filename=/dev/nvme6n1 00:27:44.401 [job8] 00:27:44.401 filename=/dev/nvme7n1 00:27:44.401 [job9] 00:27:44.401 filename=/dev/nvme8n1 00:27:44.401 [job10] 00:27:44.401 filename=/dev/nvme9n1 00:27:44.401 Could not set queue depth (nvme0n1) 00:27:44.401 Could not set queue depth (nvme10n1) 00:27:44.401 Could not set queue depth (nvme1n1) 00:27:44.401 Could not set queue depth (nvme2n1) 00:27:44.401 Could not set queue depth (nvme3n1) 00:27:44.401 Could not set queue depth (nvme4n1) 00:27:44.401 Could not set queue depth (nvme5n1) 00:27:44.401 Could not set queue depth (nvme6n1) 00:27:44.401 Could not set queue depth (nvme7n1) 00:27:44.401 Could not set queue depth (nvme8n1) 00:27:44.401 Could not set queue depth (nvme9n1) 00:27:44.983 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.983 fio-3.35 00:27:44.983 Starting 11 threads 00:27:57.213 00:27:57.213 job0: (groupid=0, jobs=1): err= 0: pid=464805: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=241, BW=60.4MiB/s (63.4MB/s)(612MiB/10120msec) 00:27:57.213 slat (usec): min=12, max=510513, avg=2274.29, stdev=16773.57 00:27:57.213 clat (msec): min=11, max=821, avg=262.08, stdev=180.04 00:27:57.213 lat (msec): min=11, max=1135, avg=264.35, stdev=182.02 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 18], 5.00th=[ 46], 10.00th=[ 87], 20.00th=[ 132], 00:27:57.213 | 30.00th=[ 144], 40.00th=[ 157], 50.00th=[ 178], 60.00th=[ 253], 00:27:57.213 | 70.00th=[ 326], 80.00th=[ 435], 90.00th=[ 558], 95.00th=[ 625], 00:27:57.213 | 99.00th=[ 735], 99.50th=[ 793], 99.90th=[ 818], 99.95th=[ 818], 00:27:57.213 | 99.99th=[ 818] 00:27:57.213 bw ( KiB/s): min=23040, max=135168, per=7.20%, avg=61030.40, stdev=32240.21, samples=20 00:27:57.213 iops : min= 90, max= 528, avg=238.40, stdev=125.94, samples=20 00:27:57.213 lat (msec) : 20=1.31%, 50=4.05%, 100=6.38%, 250=47.94%, 500=26.69% 00:27:57.213 lat (msec) : 750=12.91%, 1000=0.74% 00:27:57.213 cpu : usr=0.11%, sys=0.84%, ctx=510, majf=0, minf=3534 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=2447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job1: (groupid=0, jobs=1): err= 0: pid=464806: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=299, BW=75.0MiB/s (78.6MB/s)(758MiB/10108msec) 00:27:57.213 slat (usec): min=11, max=208023, avg=2217.48, stdev=10276.73 00:27:57.213 clat (msec): min=12, max=656, avg=210.89, stdev=120.73 00:27:57.213 lat (msec): min=13, max=662, avg=213.11, stdev=121.89 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 91], 20.00th=[ 110], 00:27:57.213 | 30.00th=[ 128], 40.00th=[ 144], 50.00th=[ 165], 60.00th=[ 224], 00:27:57.213 | 70.00th=[ 271], 80.00th=[ 313], 90.00th=[ 376], 95.00th=[ 426], 00:27:57.213 | 99.00th=[ 592], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], 00:27:57.213 | 99.99th=[ 659] 00:27:57.213 bw ( KiB/s): min=31744, max=176128, 
per=8.96%, avg=76006.40, stdev=35109.78, samples=20 00:27:57.213 iops : min= 124, max= 688, avg=296.90, stdev=137.15, samples=20 00:27:57.213 lat (msec) : 20=0.30%, 50=1.32%, 100=13.13%, 250=49.54%, 500=33.01% 00:27:57.213 lat (msec) : 750=2.70% 00:27:57.213 cpu : usr=0.10%, sys=0.98%, ctx=642, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=3032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job2: (groupid=0, jobs=1): err= 0: pid=464807: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=259, BW=64.8MiB/s (67.9MB/s)(655MiB/10106msec) 00:27:57.213 slat (usec): min=9, max=458275, avg=2747.42, stdev=16643.01 00:27:57.213 clat (msec): min=13, max=1029, avg=244.00, stdev=199.15 00:27:57.213 lat (msec): min=13, max=1222, avg=246.74, stdev=201.10 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 46], 20.00th=[ 120], 00:27:57.213 | 30.00th=[ 133], 40.00th=[ 142], 50.00th=[ 153], 60.00th=[ 199], 00:27:57.213 | 70.00th=[ 288], 80.00th=[ 393], 90.00th=[ 514], 95.00th=[ 676], 00:27:57.213 | 99.00th=[ 885], 99.50th=[ 902], 99.90th=[ 1003], 99.95th=[ 1003], 00:27:57.213 | 99.99th=[ 1028] 00:27:57.213 bw ( KiB/s): min= 6144, max=128000, per=7.71%, avg=65389.30, stdev=37729.30, samples=20 00:27:57.213 iops : min= 24, max= 500, avg=255.40, stdev=147.38, samples=20 00:27:57.213 lat (msec) : 20=0.46%, 50=10.12%, 100=5.88%, 250=50.57%, 500=21.81% 00:27:57.213 lat (msec) : 750=7.03%, 1000=4.09%, 2000=0.04% 00:27:57.213 cpu : usr=0.08%, sys=0.87%, ctx=617, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=2618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job3: (groupid=0, jobs=1): err= 0: pid=464808: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=208, BW=52.2MiB/s (54.7MB/s)(528MiB/10109msec) 00:27:57.213 slat (usec): min=11, max=362841, avg=3172.94, stdev=17817.46 00:27:57.213 clat (msec): min=14, max=854, avg=302.92, stdev=185.08 00:27:57.213 lat (msec): min=16, max=1012, avg=306.09, stdev=187.41 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 40], 5.00th=[ 62], 10.00th=[ 90], 20.00th=[ 122], 00:27:57.213 | 30.00th=[ 180], 40.00th=[ 228], 50.00th=[ 279], 60.00th=[ 321], 00:27:57.213 | 70.00th=[ 380], 80.00th=[ 464], 90.00th=[ 575], 95.00th=[ 651], 00:27:57.213 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 844], 99.95th=[ 844], 00:27:57.213 | 99.99th=[ 852] 00:27:57.213 bw ( KiB/s): min= 7680, max=117760, per=6.18%, avg=52403.20, stdev=26246.27, samples=20 00:27:57.213 iops : min= 30, max= 460, avg=204.70, stdev=102.52, samples=20 00:27:57.213 lat (msec) : 20=0.28%, 50=2.42%, 100=9.71%, 250=32.02%, 500=39.41% 00:27:57.213 lat (msec) : 750=13.69%, 1000=2.46% 00:27:57.213 cpu : usr=0.07%, sys=0.79%, ctx=436, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:57.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job4: (groupid=0, jobs=1): err= 0: pid=464809: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=210, BW=52.7MiB/s (55.3MB/s)(533MiB/10108msec) 00:27:57.213 slat (usec): min=11, max=297094, avg=2728.50, stdev=16383.29 00:27:57.213 clat (msec): min=15, max=866, avg=300.55, stdev=201.31 00:27:57.213 lat (msec): min=15, max=884, avg=303.28, stdev=203.27 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 21], 5.00th=[ 59], 10.00th=[ 74], 20.00th=[ 105], 00:27:57.213 | 30.00th=[ 148], 40.00th=[ 178], 50.00th=[ 288], 60.00th=[ 326], 00:27:57.213 | 70.00th=[ 405], 80.00th=[ 481], 90.00th=[ 592], 95.00th=[ 693], 00:27:57.213 | 99.00th=[ 844], 99.50th=[ 860], 99.90th=[ 860], 99.95th=[ 860], 00:27:57.213 | 99.99th=[ 869] 00:27:57.213 bw ( KiB/s): min=23040, max=103424, per=6.24%, avg=52940.80, stdev=24329.66, samples=20 00:27:57.213 iops : min= 90, max= 404, avg=206.80, stdev=95.04, samples=20 00:27:57.213 lat (msec) : 20=0.89%, 50=2.53%, 100=14.45%, 250=26.94%, 500=37.31% 00:27:57.213 lat (msec) : 750=15.67%, 1000=2.21% 00:27:57.213 cpu : usr=0.07%, sys=0.75%, ctx=466, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job5: (groupid=0, jobs=1): err= 0: pid=464810: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=157, BW=39.3MiB/s (41.2MB/s)(398MiB/10111msec) 00:27:57.213 slat (usec): min=10, max=192719, avg=6132.85, stdev=18991.88 00:27:57.213 clat (msec): min=13, max=838, avg=400.23, stdev=164.50 00:27:57.213 lat (msec): min=14, max=842, avg=406.36, stdev=166.93 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 30], 5.00th=[ 138], 10.00th=[ 182], 20.00th=[ 253], 00:27:57.213 | 30.00th=[ 309], 40.00th=[ 359], 50.00th=[ 401], 60.00th=[ 435], 00:27:57.213 | 70.00th=[ 481], 80.00th=[ 542], 90.00th=[ 625], 95.00th=[ 693], 00:27:57.213 | 99.00th=[ 785], 99.50th=[ 802], 99.90th=[ 827], 99.95th=[ 835], 00:27:57.213 | 99.99th=[ 835] 00:27:57.213 bw ( KiB/s): min=20992, max=88576, per=4.61%, avg=39070.60, stdev=15592.11, samples=20 00:27:57.213 iops : min= 82, max= 346, avg=152.60, stdev=60.89, samples=20 00:27:57.213 lat (msec) : 20=0.31%, 50=0.75%, 100=0.44%, 250=18.18%, 500=53.96% 00:27:57.213 lat (msec) : 750=24.59%, 1000=1.76% 00:27:57.213 cpu : usr=0.03%, sys=0.67%, ctx=286, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:27:57.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.213 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.213 issued rwts: total=1590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.213 job6: (groupid=0, jobs=1): err= 0: pid=464811: Fri Sep 27 15:46:35 2024 00:27:57.213 read: IOPS=212, BW=53.2MiB/s (55.8MB/s)(538MiB/10107msec) 00:27:57.213 slat (usec): min=10, max=145613, avg=4534.60, stdev=15621.31 00:27:57.213 clat (msec): min=11, max=840, 
avg=295.95, stdev=175.17 00:27:57.213 lat (msec): min=12, max=886, avg=300.48, stdev=177.67 00:27:57.213 clat percentiles (msec): 00:27:57.213 | 1.00th=[ 51], 5.00th=[ 120], 10.00th=[ 136], 20.00th=[ 153], 00:27:57.213 | 30.00th=[ 163], 40.00th=[ 178], 50.00th=[ 232], 60.00th=[ 313], 00:27:57.213 | 70.00th=[ 368], 80.00th=[ 451], 90.00th=[ 550], 95.00th=[ 659], 00:27:57.213 | 99.00th=[ 793], 99.50th=[ 802], 99.90th=[ 835], 99.95th=[ 844], 00:27:57.213 | 99.99th=[ 844] 00:27:57.213 bw ( KiB/s): min=17408, max=105984, per=6.30%, avg=53401.60, stdev=29590.86, samples=20 00:27:57.213 iops : min= 68, max= 414, avg=208.60, stdev=115.59, samples=20 00:27:57.213 lat (msec) : 20=0.93%, 100=1.35%, 250=48.05%, 500=34.65%, 750=12.47% 00:27:57.213 lat (msec) : 1000=2.56% 00:27:57.213 cpu : usr=0.08%, sys=0.81%, ctx=359, majf=0, minf=4097 00:27:57.213 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:27:57.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.214 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.214 job7: (groupid=0, jobs=1): err= 0: pid=464812: Fri Sep 27 15:46:35 2024 00:27:57.214 read: IOPS=599, BW=150MiB/s (157MB/s)(1514MiB/10105msec) 00:27:57.214 slat (usec): min=9, max=129121, avg=1649.30, stdev=5898.25 00:27:57.214 clat (msec): min=11, max=568, avg=105.01, stdev=79.16 00:27:57.214 lat (msec): min=12, max=568, avg=106.66, stdev=80.29 00:27:57.214 clat percentiles (msec): 00:27:57.214 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:27:57.214 | 30.00th=[ 48], 40.00th=[ 61], 50.00th=[ 90], 60.00th=[ 118], 00:27:57.214 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 182], 95.00th=[ 264], 00:27:57.214 | 99.00th=[ 405], 99.50th=[ 477], 99.90th=[ 550], 99.95th=[ 567], 00:27:57.214 | 99.99th=[ 567] 00:27:57.214 bw ( KiB/s): min=33792, max=387584, per=18.09%, avg=153420.80, stdev=101845.31, samples=20 00:27:57.214 iops : min= 132, max= 1514, avg=599.30, stdev=397.83, samples=20 00:27:57.214 lat (msec) : 20=0.28%, 50=32.53%, 100=19.63%, 250=41.88%, 500=5.45% 00:27:57.214 lat (msec) : 750=0.23% 00:27:57.214 cpu : usr=0.24%, sys=2.03%, ctx=981, majf=0, minf=4097 00:27:57.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:57.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.214 issued rwts: total=6056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.214 job8: (groupid=0, jobs=1): err= 0: pid=464813: Fri Sep 27 15:46:35 2024 00:27:57.214 read: IOPS=331, BW=83.0MiB/s (87.0MB/s)(839MiB/10105msec) 00:27:57.214 slat (usec): min=9, max=172915, avg=2635.33, stdev=9730.02 00:27:57.214 clat (msec): min=31, max=636, avg=190.01, stdev=103.37 00:27:57.214 lat (msec): min=31, max=636, avg=192.65, stdev=104.79 00:27:57.214 clat percentiles (msec): 00:27:57.214 | 1.00th=[ 44], 5.00th=[ 74], 10.00th=[ 96], 20.00th=[ 110], 00:27:57.214 | 30.00th=[ 124], 40.00th=[ 133], 50.00th=[ 148], 60.00th=[ 180], 00:27:57.214 | 70.00th=[ 230], 80.00th=[ 279], 90.00th=[ 342], 95.00th=[ 405], 00:27:57.214 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 634], 00:27:57.214 | 99.99th=[ 634] 00:27:57.214 bw ( KiB/s): min=33280, max=146944, per=9.94%, 
avg=84260.85, stdev=36232.03, samples=20 00:27:57.214 iops : min= 130, max= 574, avg=329.10, stdev=141.49, samples=20 00:27:57.214 lat (msec) : 50=2.12%, 100=10.26%, 250=61.27%, 500=25.52%, 750=0.83% 00:27:57.214 cpu : usr=0.07%, sys=1.05%, ctx=604, majf=0, minf=4097 00:27:57.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:57.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.214 issued rwts: total=3354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.214 job9: (groupid=0, jobs=1): err= 0: pid=464814: Fri Sep 27 15:46:35 2024 00:27:57.214 read: IOPS=429, BW=107MiB/s (113MB/s)(1081MiB/10063msec) 00:27:57.214 slat (usec): min=7, max=121414, avg=2309.87, stdev=7974.94 00:27:57.214 clat (msec): min=14, max=549, avg=146.34, stdev=113.18 00:27:57.214 lat (msec): min=14, max=566, avg=148.65, stdev=114.87 00:27:57.214 clat percentiles (msec): 00:27:57.214 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 41], 00:27:57.214 | 30.00th=[ 81], 40.00th=[ 104], 50.00th=[ 118], 60.00th=[ 136], 00:27:57.214 | 70.00th=[ 159], 80.00th=[ 222], 90.00th=[ 330], 95.00th=[ 393], 00:27:57.214 | 99.00th=[ 498], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 531], 00:27:57.214 | 99.99th=[ 550] 00:27:57.214 bw ( KiB/s): min=33280, max=303616, per=12.87%, avg=109107.20, stdev=79997.12, samples=20 00:27:57.214 iops : min= 130, max= 1186, avg=426.20, stdev=312.49, samples=20 00:27:57.214 lat (msec) : 20=0.12%, 50=25.39%, 100=11.86%, 250=44.46%, 500=17.25% 00:27:57.214 lat (msec) : 750=0.92% 00:27:57.214 cpu : usr=0.17%, sys=1.50%, ctx=723, majf=0, minf=4097 00:27:57.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:57.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.214 issued rwts: total=4325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.214 job10: (groupid=0, jobs=1): err= 0: pid=464817: Fri Sep 27 15:46:35 2024 00:27:57.214 read: IOPS=367, BW=91.8MiB/s (96.3MB/s)(928MiB/10104msec) 00:27:57.214 slat (usec): min=11, max=235182, avg=2640.91, stdev=9289.99 00:27:57.214 clat (msec): min=19, max=557, avg=171.46, stdev=69.29 00:27:57.214 lat (msec): min=20, max=557, avg=174.10, stdev=70.23 00:27:57.214 clat percentiles (msec): 00:27:57.214 | 1.00th=[ 94], 5.00th=[ 107], 10.00th=[ 113], 20.00th=[ 122], 00:27:57.214 | 30.00th=[ 130], 40.00th=[ 138], 50.00th=[ 148], 60.00th=[ 157], 00:27:57.214 | 70.00th=[ 176], 80.00th=[ 215], 90.00th=[ 288], 95.00th=[ 326], 00:27:57.214 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 468], 00:27:57.214 | 99.99th=[ 558] 00:27:57.214 bw ( KiB/s): min=33724, max=146432, per=11.01%, avg=93385.40, stdev=30616.92, samples=20 00:27:57.214 iops : min= 131, max= 572, avg=364.75, stdev=119.67, samples=20 00:27:57.214 lat (msec) : 20=0.03%, 50=0.08%, 100=2.05%, 250=83.54%, 500=14.28% 00:27:57.214 lat (msec) : 750=0.03% 00:27:57.214 cpu : usr=0.15%, sys=1.30%, ctx=661, majf=0, minf=4097 00:27:57.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:57.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:27:57.214 issued rwts: total=3711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.214 00:27:57.214 Run status group 0 (all jobs): 00:27:57.214 READ: bw=828MiB/s (868MB/s), 39.3MiB/s-150MiB/s (41.2MB/s-157MB/s), io=8381MiB (8788MB), run=10063-10120msec 00:27:57.214 00:27:57.214 Disk stats (read/write): 00:27:57.214 nvme0n1: ios=4813/0, merge=0/0, ticks=1251678/0, in_queue=1251678, util=96.48% 00:27:57.214 nvme10n1: ios=6025/0, merge=0/0, ticks=1258200/0, in_queue=1258200, util=96.64% 00:27:57.214 nvme1n1: ios=5182/0, merge=0/0, ticks=1257138/0, in_queue=1257138, util=97.02% 00:27:57.214 nvme2n1: ios=4199/0, merge=0/0, ticks=1262991/0, in_queue=1262991, util=97.25% 00:27:57.214 nvme3n1: ios=4191/0, merge=0/0, ticks=1253005/0, in_queue=1253005, util=97.30% 00:27:57.214 nvme4n1: ios=3122/0, merge=0/0, ticks=1252598/0, in_queue=1252598, util=97.81% 00:27:57.214 nvme5n1: ios=4256/0, merge=0/0, ticks=1248869/0, in_queue=1248869, util=98.07% 00:27:57.214 nvme6n1: ios=12097/0, merge=0/0, ticks=1256061/0, in_queue=1256061, util=98.19% 00:27:57.214 nvme7n1: ios=6649/0, merge=0/0, ticks=1251869/0, in_queue=1251869, util=98.68% 00:27:57.214 nvme8n1: ios=8414/0, merge=0/0, ticks=1220828/0, in_queue=1220828, util=98.95% 00:27:57.214 nvme9n1: ios=7405/0, merge=0/0, ticks=1256728/0, in_queue=1256728, util=99.13% 00:27:57.214 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:57.214 [global] 00:27:57.214 thread=1 00:27:57.214 invalidate=1 00:27:57.214 rw=randwrite 00:27:57.214 time_based=1 00:27:57.214 runtime=10 00:27:57.214 ioengine=libaio 00:27:57.214 direct=1 00:27:57.214 bs=262144 00:27:57.214 iodepth=64 00:27:57.214 norandommap=1 00:27:57.214 numjobs=1 00:27:57.214 00:27:57.214 [job0] 00:27:57.214 filename=/dev/nvme0n1 00:27:57.214 [job1] 00:27:57.214 filename=/dev/nvme10n1 00:27:57.214 [job2] 00:27:57.214 filename=/dev/nvme1n1 00:27:57.214 [job3] 00:27:57.214 filename=/dev/nvme2n1 00:27:57.214 [job4] 00:27:57.214 filename=/dev/nvme3n1 00:27:57.214 [job5] 00:27:57.214 filename=/dev/nvme4n1 00:27:57.214 [job6] 00:27:57.214 filename=/dev/nvme5n1 00:27:57.214 [job7] 00:27:57.214 filename=/dev/nvme6n1 00:27:57.214 [job8] 00:27:57.214 filename=/dev/nvme7n1 00:27:57.214 [job9] 00:27:57.214 filename=/dev/nvme8n1 00:27:57.214 [job10] 00:27:57.214 filename=/dev/nvme9n1 00:27:57.214 Could not set queue depth (nvme0n1) 00:27:57.214 Could not set queue depth (nvme10n1) 00:27:57.214 Could not set queue depth (nvme1n1) 00:27:57.214 Could not set queue depth (nvme2n1) 00:27:57.214 Could not set queue depth (nvme3n1) 00:27:57.214 Could not set queue depth (nvme4n1) 00:27:57.214 Could not set queue depth (nvme5n1) 00:27:57.214 Could not set queue depth (nvme6n1) 00:27:57.214 Could not set queue depth (nvme7n1) 00:27:57.214 Could not set queue depth (nvme8n1) 00:27:57.214 Could not set queue depth (nvme9n1) 00:27:57.214 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:27:57.215 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:57.215 fio-3.35 00:27:57.215 Starting 11 threads 00:28:07.231 00:28:07.231 job0: (groupid=0, jobs=1): err= 0: pid=466239: Fri Sep 27 15:46:46 2024 00:28:07.231 write: IOPS=413, BW=103MiB/s (109MB/s)(1046MiB/10110msec); 0 zone resets 00:28:07.231 slat (usec): min=25, max=75015, avg=2258.83, stdev=4535.89 00:28:07.231 clat (msec): min=3, max=301, avg=152.26, stdev=50.91 00:28:07.231 lat (msec): min=3, max=301, avg=154.52, stdev=51.43 00:28:07.231 clat percentiles (msec): 00:28:07.231 | 1.00th=[ 22], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 90], 00:28:07.231 | 30.00th=[ 132], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 00:28:07.231 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 218], 95.00th=[ 228], 00:28:07.231 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:28:07.231 | 99.99th=[ 300] 00:28:07.231 bw ( KiB/s): min=75776, max=188928, per=7.71%, avg=105523.20, stdev=31911.28, samples=20 00:28:07.231 iops : min= 296, max= 738, avg=412.20, stdev=124.65, samples=20 00:28:07.231 lat (msec) : 4=0.02%, 10=0.05%, 20=0.55%, 50=1.58%, 100=21.82% 00:28:07.231 lat (msec) : 250=73.79%, 500=2.20% 00:28:07.231 cpu : usr=1.12%, sys=1.47%, ctx=1234, majf=0, minf=1 00:28:07.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:07.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.231 issued rwts: total=0,4185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.231 job1: (groupid=0, jobs=1): err= 0: pid=466260: Fri Sep 27 15:46:46 2024 00:28:07.231 write: IOPS=510, BW=128MiB/s (134MB/s)(1286MiB/10070msec); 0 zone resets 00:28:07.231 slat (usec): min=21, max=34578, avg=1784.27, stdev=3773.16 00:28:07.231 clat (msec): min=6, max=377, avg=123.41, stdev=61.16 00:28:07.231 lat (msec): min=6, max=381, avg=125.20, stdev=61.97 00:28:07.231 clat percentiles (msec): 00:28:07.231 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 72], 20.00th=[ 84], 00:28:07.231 | 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 106], 00:28:07.231 | 70.00th=[ 140], 80.00th=[ 176], 90.00th=[ 215], 95.00th=[ 230], 00:28:07.231 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 372], 00:28:07.231 | 99.99th=[ 380] 00:28:07.231 bw ( KiB/s): min=49152, max=218112, per=9.50%, avg=130108.30, stdev=46173.00, samples=20 00:28:07.231 iops : min= 192, max= 852, avg=508.20, stdev=180.39, samples=20 00:28:07.231 lat (msec) : 10=0.68%, 20=2.12%, 50=2.55%, 100=37.34%, 250=54.54% 00:28:07.231 lat (msec) : 
500=2.78% 00:28:07.231 cpu : usr=1.24%, sys=1.37%, ctx=1676, majf=0, minf=1 00:28:07.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:07.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.231 issued rwts: total=0,5145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.231 job2: (groupid=0, jobs=1): err= 0: pid=466266: Fri Sep 27 15:46:46 2024 00:28:07.231 write: IOPS=568, BW=142MiB/s (149MB/s)(1431MiB/10073msec); 0 zone resets 00:28:07.231 slat (usec): min=21, max=19230, avg=1742.84, stdev=3332.10 00:28:07.231 clat (msec): min=16, max=246, avg=110.90, stdev=43.16 00:28:07.231 lat (msec): min=16, max=246, avg=112.64, stdev=43.75 00:28:07.231 clat percentiles (msec): 00:28:07.231 | 1.00th=[ 59], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 77], 00:28:07.231 | 30.00th=[ 83], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 104], 00:28:07.231 | 70.00th=[ 108], 80.00th=[ 146], 90.00th=[ 184], 95.00th=[ 209], 00:28:07.231 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 247], 00:28:07.231 | 99.99th=[ 247] 00:28:07.231 bw ( KiB/s): min=79872, max=224256, per=10.58%, avg=144870.40, stdev=45498.00, samples=20 00:28:07.231 iops : min= 312, max= 876, avg=565.90, stdev=177.73, samples=20 00:28:07.231 lat (msec) : 20=0.07%, 50=0.21%, 100=49.67%, 250=50.05% 00:28:07.231 cpu : usr=1.41%, sys=1.71%, ctx=1397, majf=0, minf=1 00:28:07.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:07.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.231 issued rwts: total=0,5722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.231 job3: (groupid=0, jobs=1): err= 0: pid=466269: Fri Sep 27 15:46:46 2024 00:28:07.231 write: IOPS=692, BW=173MiB/s (182MB/s)(1745MiB/10078msec); 0 zone resets 00:28:07.231 slat (usec): min=16, max=18424, avg=1420.05, stdev=2799.00 00:28:07.231 clat (msec): min=20, max=236, avg=90.97, stdev=41.20 00:28:07.231 lat (msec): min=20, max=236, avg=92.39, stdev=41.79 00:28:07.231 clat percentiles (msec): 00:28:07.231 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 59], 00:28:07.231 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 78], 60.00th=[ 102], 00:28:07.231 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 167], 95.00th=[ 180], 00:28:07.231 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 232], 99.95th=[ 236], 00:28:07.231 | 99.99th=[ 236] 00:28:07.231 bw ( KiB/s): min=87040, max=315904, per=12.93%, avg=177024.00, stdev=73220.97, samples=20 00:28:07.231 iops : min= 340, max= 1234, avg=691.50, stdev=286.02, samples=20 00:28:07.231 lat (msec) : 50=10.19%, 100=46.75%, 250=43.06% 00:28:07.231 cpu : usr=1.52%, sys=2.24%, ctx=1729, majf=0, minf=1 00:28:07.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:07.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.231 issued rwts: total=0,6978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.232 job4: (groupid=0, jobs=1): err= 0: pid=466273: Fri Sep 27 15:46:46 2024 00:28:07.232 write: IOPS=394, BW=98.7MiB/s 
(103MB/s)(998MiB/10111msec); 0 zone resets 00:28:07.232 slat (usec): min=25, max=71689, avg=2045.33, stdev=4693.68 00:28:07.232 clat (msec): min=7, max=384, avg=159.99, stdev=65.06 00:28:07.232 lat (msec): min=7, max=387, avg=162.04, stdev=65.91 00:28:07.232 clat percentiles (msec): 00:28:07.232 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 55], 20.00th=[ 105], 00:28:07.232 | 30.00th=[ 127], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 186], 00:28:07.232 | 70.00th=[ 192], 80.00th=[ 201], 90.00th=[ 230], 95.00th=[ 247], 00:28:07.232 | 99.00th=[ 300], 99.50th=[ 355], 99.90th=[ 380], 99.95th=[ 380], 00:28:07.232 | 99.99th=[ 384] 00:28:07.232 bw ( KiB/s): min=65154, max=177152, per=7.34%, avg=100563.30, stdev=36222.84, samples=20 00:28:07.232 iops : min= 254, max= 692, avg=392.80, stdev=141.52, samples=20 00:28:07.232 lat (msec) : 10=0.05%, 20=2.03%, 50=6.94%, 100=8.92%, 250=77.52% 00:28:07.232 lat (msec) : 500=4.54% 00:28:07.232 cpu : usr=1.08%, sys=1.35%, ctx=1751, majf=0, minf=1 00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.232 issued rwts: total=0,3991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.232 job5: (groupid=0, jobs=1): err= 0: pid=466291: Fri Sep 27 15:46:46 2024 00:28:07.232 write: IOPS=432, BW=108MiB/s (113MB/s)(1094MiB/10103msec); 0 zone resets 00:28:07.232 slat (usec): min=24, max=57782, avg=1986.79, stdev=4233.98 00:28:07.232 clat (msec): min=13, max=331, avg=145.78, stdev=55.08 00:28:07.232 lat (msec): min=13, max=331, avg=147.77, stdev=55.77 00:28:07.232 clat percentiles (msec): 00:28:07.232 | 1.00th=[ 25], 5.00th=[ 53], 10.00th=[ 81], 20.00th=[ 88], 00:28:07.232 | 30.00th=[ 109], 40.00th=[ 142], 50.00th=[ 159], 60.00th=[ 167], 00:28:07.232 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 213], 95.00th=[ 228], 00:28:07.232 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 330], 00:28:07.232 | 99.99th=[ 330] 00:28:07.232 bw ( KiB/s): min=73728, max=192512, per=8.06%, avg=110373.05, stdev=36612.14, samples=20 00:28:07.232 iops : min= 288, max= 752, avg=431.10, stdev=143.01, samples=20 00:28:07.232 lat (msec) : 20=0.57%, 50=3.89%, 100=23.91%, 250=68.88%, 500=2.74% 00:28:07.232 cpu : usr=1.11%, sys=1.33%, ctx=1680, majf=0, minf=1 00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.232 issued rwts: total=0,4374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.232 job6: (groupid=0, jobs=1): err= 0: pid=466304: Fri Sep 27 15:46:46 2024 00:28:07.232 write: IOPS=378, BW=94.6MiB/s (99.2MB/s)(957MiB/10111msec); 0 zone resets 00:28:07.232 slat (usec): min=17, max=62354, avg=2339.09, stdev=4665.11 00:28:07.232 clat (msec): min=10, max=418, avg=166.75, stdev=49.86 00:28:07.232 lat (msec): min=12, max=423, avg=169.09, stdev=50.33 00:28:07.232 clat percentiles (msec): 00:28:07.232 | 1.00th=[ 37], 5.00th=[ 67], 10.00th=[ 93], 20.00th=[ 153], 00:28:07.232 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 171], 00:28:07.232 | 70.00th=[ 178], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 226], 00:28:07.232 | 99.00th=[ 359], 
99.50th=[ 388], 99.90th=[ 414], 99.95th=[ 418], 00:28:07.232 | 99.99th=[ 418] 00:28:07.232 bw ( KiB/s): min=73728, max=169298, per=7.04%, avg=96349.70, stdev=19823.50, samples=20 00:28:07.232 iops : min= 288, max= 661, avg=376.35, stdev=77.37, samples=20 00:28:07.232 lat (msec) : 20=0.26%, 50=1.44%, 100=9.33%, 250=86.38%, 500=2.59% 00:28:07.232 cpu : usr=0.83%, sys=1.14%, ctx=1284, majf=0, minf=1 00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.232 issued rwts: total=0,3826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.232 job7: (groupid=0, jobs=1): err= 0: pid=466315: Fri Sep 27 15:46:46 2024 00:28:07.232 write: IOPS=638, BW=160MiB/s (167MB/s)(1607MiB/10071msec); 0 zone resets 00:28:07.232 slat (usec): min=22, max=219335, avg=1424.96, stdev=4274.48 00:28:07.232 clat (msec): min=3, max=341, avg=98.82, stdev=53.47 00:28:07.232 lat (msec): min=3, max=341, avg=100.25, stdev=54.12 00:28:07.232 clat percentiles (msec): 00:28:07.232 | 1.00th=[ 12], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:28:07.232 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 96], 60.00th=[ 99], 00:28:07.232 | 70.00th=[ 103], 80.00th=[ 132], 90.00th=[ 180], 95.00th=[ 211], 00:28:07.232 | 99.00th=[ 257], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 338], 00:28:07.232 | 99.99th=[ 342] 00:28:07.232 bw ( KiB/s): min=67206, max=296448, per=11.90%, avg=162899.50, stdev=70076.66, samples=20 00:28:07.232 iops : min= 262, max= 1158, avg=636.30, stdev=273.77, samples=20 00:28:07.232 lat (msec) : 4=0.05%, 10=0.78%, 20=1.32%, 50=2.65%, 100=60.07% 00:28:07.232 lat (msec) : 250=34.13%, 500=1.01% 00:28:07.232 cpu : usr=1.76%, sys=2.06%, ctx=2061, majf=0, minf=1 00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:07.232 issued rwts: total=0,6426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:07.232 job8: (groupid=0, jobs=1): err= 0: pid=466343: Fri Sep 27 15:46:46 2024 00:28:07.232 write: IOPS=445, BW=111MiB/s (117MB/s)(1126MiB/10110msec); 0 zone resets 00:28:07.232 slat (usec): min=26, max=54258, avg=1926.75, stdev=4279.68 00:28:07.232 clat (msec): min=2, max=351, avg=141.75, stdev=63.77 00:28:07.232 lat (msec): min=2, max=351, avg=143.68, stdev=64.52 00:28:07.232 clat percentiles (msec): 00:28:07.232 | 1.00th=[ 7], 5.00th=[ 59], 10.00th=[ 70], 20.00th=[ 74], 00:28:07.232 | 30.00th=[ 78], 40.00th=[ 124], 50.00th=[ 167], 60.00th=[ 180], 00:28:07.232 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 207], 95.00th=[ 232], 00:28:07.232 | 99.00th=[ 264], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 347], 00:28:07.232 | 99.99th=[ 351] 00:28:07.232 bw ( KiB/s): min=75776, max=222208, per=8.30%, avg=113646.55, stdev=42706.83, samples=20 00:28:07.232 iops : min= 296, max= 868, avg=443.90, stdev=166.85, samples=20 00:28:07.232 lat (msec) : 4=0.16%, 10=1.80%, 20=0.67%, 50=1.71%, 100=32.87% 00:28:07.232 lat (msec) : 250=60.17%, 500=2.62% 00:28:07.232 cpu : usr=0.99%, sys=1.61%, ctx=1692, majf=0, minf=1 00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:07.232 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:28:07.232 issued rwts: total=0,4502,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64
00:28:07.232 job9: (groupid=0, jobs=1): err= 0: pid=466356: Fri Sep 27 15:46:46 2024
00:28:07.232 write: IOPS=494, BW=124MiB/s (130MB/s)(1251MiB/10115msec); 0 zone resets
00:28:07.232 slat (usec): min=20, max=34771, avg=1959.05, stdev=3987.05
00:28:07.232 clat (msec): min=13, max=311, avg=127.40, stdev=62.83
00:28:07.232 lat (msec): min=13, max=311, avg=129.36, stdev=63.69
00:28:07.232 clat percentiles (msec):
00:28:07.232 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 59],
00:28:07.232 | 30.00th=[ 77], 40.00th=[ 93], 50.00th=[ 111], 60.00th=[ 167],
00:28:07.232 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 228],
00:28:07.232 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 300],
00:28:07.232 | 99.99th=[ 313]
00:28:07.232 bw ( KiB/s): min=67584, max=259584, per=9.23%, avg=126438.40, stdev=59311.91, samples=20
00:28:07.232 iops : min= 264, max= 1014, avg=493.90, stdev=231.69, samples=20
00:28:07.232 lat (msec) : 20=0.06%, 50=7.28%, 100=34.98%, 250=56.67%, 500=1.02%
00:28:07.232 cpu : usr=1.25%, sys=1.52%, ctx=1278, majf=0, minf=1
00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:28:07.232 issued rwts: total=0,5003,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64
00:28:07.232 job10: (groupid=0, jobs=1): err= 0: pid=466367: Fri Sep 27 15:46:46 2024
00:28:07.232 write: IOPS=390, BW=97.6MiB/s (102MB/s)(987MiB/10112msec); 0 zone resets
00:28:07.232 slat (usec): min=21, max=32317, avg=2530.03, stdev=4819.86
00:28:07.232 clat (msec): min=33, max=391, avg=161.37, stdev=59.11
00:28:07.232 lat (msec): min=33, max=391, avg=163.90, stdev=59.84
00:28:07.232 clat percentiles (msec):
00:28:07.232 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 92],
00:28:07.232 | 30.00th=[ 140], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169],
00:28:07.232 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 220], 95.00th=[ 251],
00:28:07.232 | 99.00th=[ 372], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393],
00:28:07.232 | 99.99th=[ 393]
00:28:07.232 bw ( KiB/s): min=47104, max=190464, per=7.26%, avg=99413.35, stdev=34641.30, samples=20
00:28:07.232 iops : min= 184, max= 744, avg=388.30, stdev=135.33, samples=20
00:28:07.232 lat (msec) : 50=0.20%, 100=21.54%, 250=73.01%, 500=5.25%
00:28:07.232 cpu : usr=1.05%, sys=1.15%, ctx=972, majf=0, minf=1
00:28:07.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:28:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:07.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:28:07.232 issued rwts: total=0,3946,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:07.232 latency : target=0, window=0, percentile=100.00%, depth=64
00:28:07.232
00:28:07.232 Run status group 0 (all jobs):
00:28:07.232 WRITE: bw=1337MiB/s (1402MB/s), 94.6MiB/s-173MiB/s (99.2MB/s-182MB/s), io=13.2GiB (14.2GB), run=10070-10115msec
00:28:07.232
00:28:07.232 Disk stats (read/write):
00:28:07.232 nvme0n1: ios=44/8323, merge=0/0, ticks=2121/1224077, in_queue=1226198, util=99.89%
00:28:07.233 nvme10n1: ios=46/9959, merge=0/0, ticks=1397/1199478, in_queue=1200875, util=100.00%
00:28:07.233 nvme1n1: ios=0/11113, merge=0/0, ticks=0/1195981, in_queue=1195981, util=96.87%
00:28:07.233 nvme2n1: ios=43/13631, merge=0/0, ticks=994/1196110, in_queue=1197104, util=100.00%
00:28:07.233 nvme3n1: ios=41/7925, merge=0/0, ticks=2806/1232078, in_queue=1234884, util=100.00%
00:28:07.233 nvme4n1: ios=0/8714, merge=0/0, ticks=0/1232933, in_queue=1232933, util=97.73%
00:28:07.233 nvme5n1: ios=45/7602, merge=0/0, ticks=452/1230993, in_queue=1231445, util=98.87%
00:28:07.233 nvme6n1: ios=30/12514, merge=0/0, ticks=950/1181173, in_queue=1182123, util=100.00%
00:28:07.233 nvme7n1: ios=0/8948, merge=0/0, ticks=0/1230853, in_queue=1230853, util=98.66%
00:28:07.233 nvme8n1: ios=0/9946, merge=0/0, ticks=0/1224567, in_queue=1224567, util=98.94%
00:28:07.233 nvme9n1: ios=39/7843, merge=0/0, ticks=1846/1224584, in_queue=1226430, util=100.00%
00:28:07.233 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:28:07.233 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:28:07.233 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:07.233 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:28:07.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:28:07.233 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:28:07.233 15:46:47
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:07.233 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:07.805 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:07.805 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:07.805 15:46:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:07.805 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:08.066 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:08.066 15:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.066 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:08.327 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.327 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:08.588 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:08.588 15:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.588 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:08.588 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:08.588 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:08.848 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:08.848 15:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:08.848 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:09.109 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:09.109 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:09.370 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:09.370 15:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:09.370 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:09.370 
15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.370 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.370 rmmod nvme_tcp 00:28:09.370 rmmod nvme_fabrics 00:28:09.632 rmmod nvme_keyring 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 456307 ']' 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 456307 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 456307 ']' 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 456307 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 456307 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 456307' 00:28:09.632 killing process with pid 456307 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 456307 00:28:09.632 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 456307 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@302 -- # remove_spdk_ns
00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:09.893 15:46:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:12.438
00:28:12.438 real 1m18.032s
00:28:12.438 user 4m57.789s
00:28:12.438 sys 0m17.317s
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:12.438 ************************************
00:28:12.438 END TEST nvmf_multiconnection
00:28:12.438 ************************************
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:28:12.438 ************************************
00:28:12.438 START TEST nvmf_initiator_timeout
00:28:12.438 ************************************
00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:28:12.438 * Looking for test storage...
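The run above drives eleven parallel fio jobs, one per connected NVMe-oF namespace, each issuing 256 KiB random writes at queue depth 64 through libaio for roughly ten seconds; aggregate write bandwidth comes out at 1337 MiB/s. A minimal job file reproducing those parameters might look like the sketch below. This is a reconstruction, not the harness's actual configuration: the direct/time_based settings and the device paths are assumptions.

#!/usr/bin/env bash
# Sketch: rebuild the workload parameters visible in the job lines above
# (rw=randwrite, bs=256k, iodepth=64, ioengine=libaio, 11 jobs, ~10s runs).
cat > multiconn.fio <<'EOF'
[global]
ioengine=libaio
rw=randwrite
bs=256k
iodepth=64
; assumptions below: direct I/O and a fixed duration, matching the
; ~10070-10115 msec runtimes reported in the run status line
direct=1
time_based=1
runtime=10
EOF

# One job per namespace; /dev/nvme0n1 .. /dev/nvme10n1 mirror the disk stats.
for i in $(seq 0 10); do
    printf '[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i" >> multiconn.fio
done

fio multiconn.fio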
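After the I/O phase, the trace tears the eleven subsystems down in order: each iteration of the loop at multiconnection.sh line 37 disconnects the initiator-side controller, polls lsblk until the SPDKn serial disappears, and deletes the subsystem from the target over RPC. A condensed sketch of that pattern follows, assuming $SPDK_ROOT points at the SPDK tree; the real script goes through the rpc_cmd wrapper around scripts/rpc.py and a bounded retry loop rather than the simple sleep shown here.

#!/usr/bin/env bash
# Condensed sketch of the teardown traced above; $SPDK_ROOT is assumed,
# and the sleep loop stands in for waitforserial_disconnect.
NVMF_SUBSYS=11

for i in $(seq 1 $NVMF_SUBSYS); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"

    # Drop the initiator-side connection ("NQN:... disconnected 1 controller(s)").
    nvme disconnect -n "$nqn"

    # Wait for the namespace's serial (SPDK1..SPDK11) to leave lsblk.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done

    # Remove the subsystem from the running nvmf target over JSON-RPC.
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem "$nqn"
done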
00:28:12.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:12.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.438 --rc genhtml_branch_coverage=1 00:28:12.438 --rc genhtml_function_coverage=1 00:28:12.438 --rc genhtml_legend=1 00:28:12.438 --rc geninfo_all_blocks=1 00:28:12.438 --rc geninfo_unexecuted_blocks=1 00:28:12.438 00:28:12.438 ' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:12.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.438 --rc genhtml_branch_coverage=1 00:28:12.438 --rc genhtml_function_coverage=1 00:28:12.438 --rc genhtml_legend=1 00:28:12.438 --rc geninfo_all_blocks=1 00:28:12.438 --rc geninfo_unexecuted_blocks=1 00:28:12.438 00:28:12.438 ' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:12.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.438 --rc genhtml_branch_coverage=1 00:28:12.438 --rc genhtml_function_coverage=1 00:28:12.438 --rc genhtml_legend=1 00:28:12.438 --rc geninfo_all_blocks=1 00:28:12.438 --rc geninfo_unexecuted_blocks=1 00:28:12.438 00:28:12.438 ' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:12.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.438 --rc genhtml_branch_coverage=1 00:28:12.438 --rc genhtml_function_coverage=1 00:28:12.438 --rc genhtml_legend=1 00:28:12.438 --rc geninfo_all_blocks=1 00:28:12.438 --rc geninfo_unexecuted_blocks=1 00:28:12.438 00:28:12.438 ' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.438 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.439 15:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:12.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.439 15:46:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.576 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.577 15:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:20.577 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:20.577 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:20.577 Found net devices under 0000:31:00.0: cvl_0_0 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:20.577 Found net devices under 0000:31:00.1: cvl_0_1 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.577 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.577 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.577 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.577 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:28:20.578 00:28:20.578 --- 10.0.0.2 ping statistics --- 00:28:20.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.578 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:28:20.578 00:28:20.578 --- 10.0.0.1 ping statistics --- 00:28:20.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.578 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=472913 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 472913 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 472913 ']' 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.578 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.578 [2024-09-27 15:47:00.330136] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:20.578 [2024-09-27 15:47:00.330202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.578 [2024-09-27 15:47:00.420871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.578 [2024-09-27 15:47:00.468260] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.578 [2024-09-27 15:47:00.468317] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.578 [2024-09-27 15:47:00.468326] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.578 [2024-09-27 15:47:00.468333] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.578 [2024-09-27 15:47:00.468339] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
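The nvmf_tcp_init sequence above (nvmf/common.sh@250-291) moves the target-side E810 port into a private network namespace so initiator and target traffic must cross the physical link, opens TCP port 4420 through iptables, and verifies reachability in both directions before the target application starts. A minimal standalone sketch of the same bring-up, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing recorded in this run:

# Sketch of the namespace topology this harness builds; interface names
# (cvl_0_0 = target port, cvl_0_1 = initiator port) are taken from this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator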
00:28:20.578 [2024-09-27 15:47:00.468489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.578 [2024-09-27 15:47:00.468651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.578 [2024-09-27 15:47:00.468809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.578 [2024-09-27 15:47:00.468810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 Malloc0 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.839 Delay0 00:28:20.839 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.840 [2024-09-27 15:47:01.244315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.840 15:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.840 [2024-09-27 15:47:01.284733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.840 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:22.753 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:22.753 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:22.753 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:22.753 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:22.753 15:47:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=473655 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:24.692 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:24.692 [global] 00:28:24.692 thread=1 00:28:24.692 invalidate=1 00:28:24.692 rw=write 00:28:24.692 time_based=1 00:28:24.692 runtime=60 00:28:24.692 ioengine=libaio 00:28:24.692 direct=1 00:28:24.692 bs=4096 00:28:24.692 iodepth=1 00:28:24.692 norandommap=0 00:28:24.692 numjobs=1 00:28:24.692 00:28:24.692 verify_dump=1 00:28:24.692 verify_backlog=512 00:28:24.692 verify_state_save=0 00:28:24.692 do_verify=1 00:28:24.692 verify=crc32c-intel 00:28:24.692 [job0] 00:28:24.692 filename=/dev/nvme0n1 00:28:24.692 Could not set queue depth (nvme0n1) 00:28:24.958 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:24.958 fio-3.35 00:28:24.958 Starting 1 thread 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 true 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 true 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 true 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 true 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.502 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:30.801 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.802 15:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:30.802 true 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:30.802 true 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:30.802 true 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:30.802 true 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:30.802 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 473655 00:29:27.074 00:29:27.074 job0: (groupid=0, jobs=1): err= 0: pid=474004: Fri Sep 27 15:48:05 2024 00:29:27.074 read: IOPS=52, BW=211KiB/s (216kB/s)(12.3MiB/60034msec) 00:29:27.074 slat (nsec): min=6841, max=63935, avg=24879.03, stdev=5711.29 00:29:27.074 clat (usec): min=191, max=42763, avg=5122.53, stdev=12620.85 00:29:27.074 lat (usec): min=218, max=42789, avg=5147.41, stdev=12621.39 00:29:27.074 clat percentiles (usec): 00:29:27.074 | 1.00th=[ 408], 5.00th=[ 469], 10.00th=[ 515], 20.00th=[ 603], 00:29:27.074 | 30.00th=[ 627], 40.00th=[ 693], 50.00th=[ 930], 60.00th=[ 955], 00:29:27.074 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[41157], 95.00th=[42206], 00:29:27.074 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:29:27.074 | 99.99th=[42730] 00:29:27.074 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60034msec); 0 zone resets 00:29:27.074 slat (usec): min=9, max=27763, avg=43.04, stdev=525.02 00:29:27.074 clat (usec): min=135, max=42031k, avg=12151.85, stdev=702069.53 00:29:27.075 lat (usec): min=147, max=42031k, avg=12194.89, stdev=702069.59 00:29:27.075 clat percentiles (usec): 00:29:27.075 | 1.00th=[ 186], 5.00th=[ 210], 10.00th=[ 233], 00:29:27.075 | 20.00th=[ 285], 30.00th=[ 310], 40.00th=[ 375], 00:29:27.075 | 50.00th=[ 424], 60.00th=[ 490], 70.00th=[ 515], 00:29:27.075 | 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 635], 00:29:27.075 | 99.00th=[ 693], 99.50th=[ 742], 99.90th=[ 
1029], 00:29:27.075 | 99.95th=[ 3752], 99.99th=[17112761] 00:29:27.075 bw ( KiB/s): min= 760, max= 4096, per=100.00%, avg=2867.20, stdev=1375.20, samples=10 00:29:27.075 iops : min= 190, max= 1024, avg=716.80, stdev=343.80, samples=10 00:29:27.075 lat (usec) : 250=6.36%, 500=32.16%, 750=33.54%, 1000=20.06% 00:29:27.075 lat (msec) : 2=2.89%, 4=0.03%, 50=4.95%, >=2000=0.01% 00:29:27.075 cpu : usr=0.19%, sys=0.30%, ctx=6751, majf=0, minf=1 00:29:27.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:27.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.075 issued rwts: total=3161,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:27.075 00:29:27.075 Run status group 0 (all jobs): 00:29:27.075 READ: bw=211KiB/s (216kB/s), 211KiB/s-211KiB/s (216kB/s-216kB/s), io=12.3MiB (12.9MB), run=60034-60034msec 00:29:27.075 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60034-60034msec 00:29:27.075 00:29:27.075 Disk stats (read/write): 00:29:27.075 nvme0n1: ios=3210/3584, merge=0/0, ticks=17350/1366, in_queue=18716, util=99.72% 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:27.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:27.075 nvmf hotplug test: fio successful as expected 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:27.075 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.075 rmmod nvme_tcp 00:29:27.075 rmmod nvme_fabrics 00:29:27.075 rmmod nvme_keyring 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 472913 ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 472913 ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472913' 00:29:27.075 killing process with pid 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 472913 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:29:27.075 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.075 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.648 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.648 00:29:27.648 real 1m15.585s 00:29:27.648 user 4m33.425s 00:29:27.648 sys 0m7.925s 00:29:27.648 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:27.648 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:27.648 ************************************ 00:29:27.648 END TEST nvmf_initiator_timeout 00:29:27.648 ************************************ 00:29:27.648 15:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:27.648 15:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:29:27.648 15:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:29:27.648 15:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.648 15:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.795 15:48:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:35.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:35.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ 
tcp == rdma ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:35.795 Found net devices under 0000:31:00.0: cvl_0_0 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:35.795 Found net devices under 0000:31:00.1: cvl_0_1 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:35.795 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:35.796 ************************************ 00:29:35.796 START TEST nvmf_perf_adq 00:29:35.796 ************************************ 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:35.796 * Looking for test storage... 
00:29:35.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:35.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.796 --rc genhtml_branch_coverage=1 00:29:35.796 --rc genhtml_function_coverage=1 00:29:35.796 --rc genhtml_legend=1 00:29:35.796 --rc geninfo_all_blocks=1 00:29:35.796 --rc geninfo_unexecuted_blocks=1 00:29:35.796 00:29:35.796 ' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:35.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.796 --rc genhtml_branch_coverage=1 00:29:35.796 --rc genhtml_function_coverage=1 00:29:35.796 --rc genhtml_legend=1 00:29:35.796 --rc geninfo_all_blocks=1 00:29:35.796 --rc geninfo_unexecuted_blocks=1 00:29:35.796 00:29:35.796 ' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:35.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.796 --rc genhtml_branch_coverage=1 00:29:35.796 --rc genhtml_function_coverage=1 00:29:35.796 --rc genhtml_legend=1 00:29:35.796 --rc geninfo_all_blocks=1 00:29:35.796 --rc geninfo_unexecuted_blocks=1 00:29:35.796 00:29:35.796 ' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:35.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.796 --rc genhtml_branch_coverage=1 00:29:35.796 --rc genhtml_function_coverage=1 00:29:35.796 --rc genhtml_legend=1 00:29:35.796 --rc geninfo_all_blocks=1 00:29:35.796 --rc geninfo_unexecuted_blocks=1 00:29:35.796 00:29:35.796 ' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
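The lt/cmp_versions exchange above (scripts/common.sh@333-368) gates the lcov options on whether the installed lcov predates 2.x by comparing dotted version strings one numeric component at a time, with a missing component comparing as 0. A condensed sketch of that comparison, assuming purely numeric components as the decimal checks above require:

# Sketch: true (exit 0) when version $1 sorts strictly before version $2.
version_lt() {
    local -a v1 v2
    local i c1 c2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        c1=${v1[i]:-0} c2=${v2[i]:-0}   # missing components compare as 0
        if ((c1 < c2)); then return 0; fi
        if ((c1 > c2)); then return 1; fi
    done
    return 1                            # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: keep the legacy branch-coverage flags"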
00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.796 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:35.797 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.797 15:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:43.943 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:43.943 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:43.943 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:43.944 Found net devices under 0000:31:00.0: cvl_0_0 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:43.944 Found net devices under 0000:31:00.1: cvl_0_1 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:43.944 15:48:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:43.944 15:48:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:43.944 15:48:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:43.944 15:48:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:44.205 15:48:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:48.414 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.623 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:52.623 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:52.623 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:52.623 Found net devices under 0000:31:00.0: cvl_0_0 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:52.623 Found net devices under 0000:31:00.1: cvl_0_1 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.623 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:29:52.885 00:29:52.885 --- 10.0.0.2 ping statistics --- 00:29:52.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.885 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:52.885 00:29:52.885 --- 10.0.0.1 ping statistics --- 00:29:52.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.885 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:52.885 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=495841 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 495841 
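The network plumbing traced above (nvmf_tcp_init) can be reproduced by hand. A minimal sketch, assuming the E810 port names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 test subnet from this run:

  #!/usr/bin/env bash
  # Sketch of the nvmf_tcp_init steps above; interface names are from this run.
  set -euo pipefail
  TARGET_IF=cvl_0_0      # moved into the namespace; hosts the NVMe/TCP listener
  INITIATOR_IF=cvl_0_1   # stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  # Putting the target port in its own namespace forces target/initiator
  # traffic onto the physical link instead of the kernel loopback path.
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port; the comment tags the rule for later cleanup.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions before launching the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output above confirm the link is usable before nvmf_tgt is started.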
00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 495841 ']' 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.146 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.146 [2024-09-27 15:48:33.455162] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:53.146 [2024-09-27 15:48:33.455231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.146 [2024-09-27 15:48:33.544830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.146 [2024-09-27 15:48:33.593128] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.146 [2024-09-27 15:48:33.593181] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.146 [2024-09-27 15:48:33.593190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.146 [2024-09-27 15:48:33.593197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.146 [2024-09-27 15:48:33.593203] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
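The target is launched with --wait-for-rpc so that socket-implementation options can be applied via RPC before framework_start_init runs. A condensed sketch of the launch plus the rpc_cmd sequence and ADQ placement check that follow below; everything is taken from this trace except the socket-poll loop, which is a simplified stand-in for the harness's waitforlisten helper:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this run's checkout
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Target on cores 0-3 inside the namespace, paused until RPC configuration.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

  impl=$("$RPC" sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
  # This first pass uses --enable-placement-id 0; the second pass further
  # below repeats the whole sequence with --enable-placement-id 1.
  "$RPC" sock_impl_set_options --enable-placement-id 0 \
    --enable-zerocopy-send-server -i "$impl"
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0

  # Back a subsystem with a 64 MiB malloc bdev (512-byte blocks) and listen
  # on the namespaced target IP.
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive the target from four initiator cores (4-7) while it serves I/O.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perfpid=$!
  sleep 2

  # With one connection per perf core, each of the target's four poll groups
  # should own exactly one I/O qpair; jq prints one line per matching group.
  count=$("$RPC" nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
  [[ $count -eq 4 ]] || echo "ADQ placement check failed: $count/4 poll groups" >&2
  wait "$perfpid"

The stats dump further below shows that check passing: all four poll groups report current_io_qpairs of 1, with completed_nvme_io spread evenly across them.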
00:29:53.146 [2024-09-27 15:48:33.593859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.146 [2024-09-27 15:48:33.594001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.146 [2024-09-27 15:48:33.594446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.146 [2024-09-27 15:48:33.594450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.089 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 [2024-09-27 15:48:34.477739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 Malloc1 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.090 [2024-09-27 15:48:34.542252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=496163 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:54.090 15:48:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:56.637 "tick_rate": 2400000000, 00:29:56.637 "poll_groups": [ 00:29:56.637 { 00:29:56.637 "name": "nvmf_tgt_poll_group_000", 00:29:56.637 "admin_qpairs": 1, 00:29:56.637 "io_qpairs": 1, 00:29:56.637 "current_admin_qpairs": 1, 00:29:56.637 "current_io_qpairs": 1, 00:29:56.637 "pending_bdev_io": 0, 00:29:56.637 
"completed_nvme_io": 17573, 00:29:56.637 "transports": [ 00:29:56.637 { 00:29:56.637 "trtype": "TCP" 00:29:56.637 } 00:29:56.637 ] 00:29:56.637 }, 00:29:56.637 { 00:29:56.637 "name": "nvmf_tgt_poll_group_001", 00:29:56.637 "admin_qpairs": 0, 00:29:56.637 "io_qpairs": 1, 00:29:56.637 "current_admin_qpairs": 0, 00:29:56.637 "current_io_qpairs": 1, 00:29:56.637 "pending_bdev_io": 0, 00:29:56.637 "completed_nvme_io": 21579, 00:29:56.637 "transports": [ 00:29:56.637 { 00:29:56.637 "trtype": "TCP" 00:29:56.637 } 00:29:56.637 ] 00:29:56.637 }, 00:29:56.637 { 00:29:56.637 "name": "nvmf_tgt_poll_group_002", 00:29:56.637 "admin_qpairs": 0, 00:29:56.637 "io_qpairs": 1, 00:29:56.637 "current_admin_qpairs": 0, 00:29:56.637 "current_io_qpairs": 1, 00:29:56.637 "pending_bdev_io": 0, 00:29:56.637 "completed_nvme_io": 19823, 00:29:56.637 "transports": [ 00:29:56.637 { 00:29:56.637 "trtype": "TCP" 00:29:56.637 } 00:29:56.637 ] 00:29:56.637 }, 00:29:56.637 { 00:29:56.637 "name": "nvmf_tgt_poll_group_003", 00:29:56.637 "admin_qpairs": 0, 00:29:56.637 "io_qpairs": 1, 00:29:56.637 "current_admin_qpairs": 0, 00:29:56.637 "current_io_qpairs": 1, 00:29:56.637 "pending_bdev_io": 0, 00:29:56.637 "completed_nvme_io": 18551, 00:29:56.637 "transports": [ 00:29:56.637 { 00:29:56.637 "trtype": "TCP" 00:29:56.637 } 00:29:56.637 ] 00:29:56.637 } 00:29:56.637 ] 00:29:56.637 }' 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:56.637 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 496163 00:30:04.774 Initializing NVMe Controllers 00:30:04.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:04.774 Initialization complete. Launching workers. 
00:30:04.774 ======================================================== 00:30:04.774 Latency(us) 00:30:04.774 Device Information : IOPS MiB/s Average min max 00:30:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13186.40 51.51 4853.72 1285.01 12244.69 00:30:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13896.19 54.28 4605.07 1320.80 13263.42 00:30:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13717.50 53.58 4664.79 1199.45 12660.99 00:30:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13038.40 50.93 4908.16 1292.28 14675.17 00:30:04.774 ======================================================== 00:30:04.774 Total : 53838.49 210.31 4754.59 1199.45 14675.17 00:30:04.774 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.774 rmmod nvme_tcp 00:30:04.774 rmmod nvme_fabrics 00:30:04.774 rmmod nvme_keyring 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 495841 ']' 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 495841 ']' 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 495841' 00:30:04.774 killing process with pid 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 495841 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:04.774 15:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:04.774 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:30:04.775 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.775 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.775 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.775 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.775 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.688 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.688 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:30:06.688 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:06.688 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:08.602 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:10.518 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.809 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:15.809 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:15.809 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:15.809 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:15.810 Found net devices under 0000:31:00.0: cvl_0_0 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:15.810 Found net devices under 0000:31:00.1: cvl_0_1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:30:15.810 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:30:15.810 00:30:15.810 --- 10.0.0.2 ping statistics --- 00:30:15.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.810 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:15.810 00:30:15.810 --- 10.0.0.1 ping statistics --- 00:30:15.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.810 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:15.810 net.core.busy_poll = 1 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:15.810 net.core.busy_read = 1 00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:15.810 15:48:56 
00:30:15.810 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=500628
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 500628
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 500628 ']'
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:16.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:16.072 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:16.072 [2024-09-27 15:48:56.534778] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:30:16.072 [2024-09-27 15:48:56.534851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:16.333 [2024-09-27 15:48:56.623777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:16.333 [2024-09-27 15:48:56.671506] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:16.333 [2024-09-27 15:48:56.671560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:16.333 [2024-09-27 15:48:56.671569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:16.333 [2024-09-27 15:48:56.671576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:16.333 [2024-09-27 15:48:56.671582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:16.333 [2024-09-27 15:48:56.671738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:16.333 [2024-09-27 15:48:56.672306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:30:16.333 [2024-09-27 15:48:56.672503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:30:16.333 [2024-09-27 15:48:56.672504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:16.905 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:16.905 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0
00:30:16.905 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:30:16.905 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:16.905 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
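
The adq_configure_driver steps traced above are the heart of this test: hardware traffic-class offload is enabled on the target port, busy polling is switched on, and an mqprio root qdisc splits the NIC's queues into two traffic classes (queues 0-1 for TC0, queues 2-3 for TC1), while a flower filter pins everything addressed to 10.0.0.2:4420 — i.e. all NVMe/TCP traffic — into TC1 in hardware (skip_sw). Condensed, with device names taken from this run (to be run inside the target namespace; an ADQ-capable NIC such as the E810 used here is assumed):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the SPDK side the matching knobs are visible just above: sock_impl_set_options --enable-placement-id 1 and the transport's --sock-priority 1, which together let connections be placed on the poll groups that own the ADQ queue set.
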
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 [2024-09-27 15:48:57.561781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 Malloc1
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:17.167 [2024-09-27 15:48:57.627433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=500978
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:30:17.167 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
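
rpc_cmd in this harness is a thin wrapper that forwards to SPDK's scripts/rpc.py over the /var/tmp/spdk.sock socket inside the namespace; issued directly, the four provisioning calls traced above would look roughly like this (paths relative to the SPDK checkout):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That is: a 64 MiB RAM-backed bdev with 512-byte blocks, an allow-any-host subsystem, the bdev attached as its namespace, and a TCP listener on the target address — the minimal NVMe-oF target that spdk_nvme_perf then hammers with 4 KiB random reads at queue depth 64 from cores 4-7 (-c 0xF0).
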
00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:30:19.716 "tick_rate": 2400000000, 00:30:19.716 "poll_groups": [ 00:30:19.716 { 00:30:19.716 "name": "nvmf_tgt_poll_group_000", 00:30:19.716 "admin_qpairs": 1, 00:30:19.716 "io_qpairs": 2, 00:30:19.716 "current_admin_qpairs": 1, 00:30:19.716 "current_io_qpairs": 2, 00:30:19.716 "pending_bdev_io": 0, 00:30:19.716 "completed_nvme_io": 25983, 00:30:19.716 "transports": [ 00:30:19.716 { 00:30:19.716 "trtype": "TCP" 00:30:19.716 } 00:30:19.716 ] 00:30:19.716 }, 00:30:19.716 { 00:30:19.716 "name": "nvmf_tgt_poll_group_001", 00:30:19.716 "admin_qpairs": 0, 00:30:19.716 "io_qpairs": 2, 00:30:19.716 "current_admin_qpairs": 0, 00:30:19.716 "current_io_qpairs": 2, 00:30:19.716 "pending_bdev_io": 0, 00:30:19.716 "completed_nvme_io": 27151, 00:30:19.716 "transports": [ 00:30:19.716 { 00:30:19.716 "trtype": "TCP" 00:30:19.716 } 00:30:19.716 ] 00:30:19.716 }, 00:30:19.716 { 00:30:19.716 "name": "nvmf_tgt_poll_group_002", 00:30:19.716 "admin_qpairs": 0, 00:30:19.716 "io_qpairs": 0, 00:30:19.716 "current_admin_qpairs": 0, 00:30:19.716 "current_io_qpairs": 0, 00:30:19.716 "pending_bdev_io": 0, 00:30:19.716 "completed_nvme_io": 0, 00:30:19.716 "transports": [ 00:30:19.716 { 00:30:19.716 "trtype": "TCP" 00:30:19.716 } 00:30:19.716 ] 00:30:19.716 }, 00:30:19.716 { 00:30:19.716 "name": "nvmf_tgt_poll_group_003", 00:30:19.716 "admin_qpairs": 0, 00:30:19.716 "io_qpairs": 0, 00:30:19.716 "current_admin_qpairs": 0, 00:30:19.716 "current_io_qpairs": 0, 00:30:19.716 "pending_bdev_io": 0, 00:30:19.716 "completed_nvme_io": 0, 00:30:19.716 "transports": [ 00:30:19.716 { 00:30:19.716 "trtype": "TCP" 00:30:19.716 } 00:30:19.716 ] 00:30:19.716 } 00:30:19.716 ] 00:30:19.716 }' 00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:30:19.716 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 500978 00:30:27.853 Initializing NVMe Controllers 00:30:27.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:27.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:27.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:27.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:27.853 Initialization complete. Launching workers. 
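
The nvmf_get_stats snapshot above is what the ADQ assertion keys on: with all I/O steered into one traffic class, the four connections must land on poll groups 000 and 001 only, leaving groups 002 and 003 completely idle. The check traced above (count=2, so [[ 2 -lt 2 ]] is false and the test proceeds) boils down to something like this sketch:

  count=$(rpc_cmd nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
      | wc -l)                     # one output line per idle poll group
  [[ $count -lt 2 ]] && echo "ADQ steering failed: I/O spread beyond the expected poll groups"

After the check passes, the harness simply waits for spdk_nvme_perf (pid 500978) to finish and prints its results below.
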
00:30:27.853 ========================================================
00:30:27.853 Latency(us)
00:30:27.853 Device Information : IOPS MiB/s Average min max
00:30:27.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10518.90 41.09 6103.17 1344.93 52120.26
00:30:27.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7076.40 27.64 9062.32 1361.17 53961.64
00:30:27.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12078.80 47.18 5298.37 1058.02 51374.56
00:30:27.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8177.10 31.94 7850.78 1257.72 54487.73
00:30:27.853 ========================================================
00:30:27.853 Total : 37851.20 147.86 6777.11 1058.02 54487.73
00:30:27.853
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:27.853 rmmod nvme_tcp
00:30:27.853 rmmod nvme_fabrics
00:30:27.853 rmmod nvme_keyring
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 500628 ']'
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 500628
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 500628 ']'
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 500628
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500628
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500628'
00:30:27.853 killing process with pid 500628
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 500628
00:30:27.853 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 500628
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:27.853 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:30:31.155
00:30:31.155 real 0m55.749s
00:30:31.155 user 2m50.113s
00:30:31.155 sys 0m12.710s
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:30:31.155 ************************************
00:30:31.155 END TEST nvmf_perf_adq
00:30:31.155 ************************************
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:30:31.155 ************************************
00:30:31.155 START TEST nvmf_shutdown
00:30:31.155 ************************************
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:30:31.155 * Looking for test storage...
00:30:31.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:30:31.155 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:30:31.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:31.156 --rc genhtml_branch_coverage=1
00:30:31.156 --rc genhtml_function_coverage=1
00:30:31.156 --rc genhtml_legend=1
00:30:31.156 --rc geninfo_all_blocks=1
00:30:31.156 --rc geninfo_unexecuted_blocks=1
00:30:31.156
00:30:31.156 '
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:30:31.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:31.156 --rc genhtml_branch_coverage=1
00:30:31.156 --rc genhtml_function_coverage=1
00:30:31.156 --rc genhtml_legend=1
00:30:31.156 --rc geninfo_all_blocks=1
00:30:31.156 --rc geninfo_unexecuted_blocks=1
00:30:31.156
00:30:31.156 '
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:30:31.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:31.156 --rc genhtml_branch_coverage=1
00:30:31.156 --rc genhtml_function_coverage=1
00:30:31.156 --rc genhtml_legend=1
00:30:31.156 --rc geninfo_all_blocks=1
00:30:31.156 --rc geninfo_unexecuted_blocks=1
00:30:31.156
00:30:31.156 '
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:30:31.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:31.156 --rc genhtml_branch_coverage=1
00:30:31.156 --rc genhtml_function_coverage=1
00:30:31.156 --rc genhtml_legend=1
00:30:31.156 --rc geninfo_all_blocks=1
00:30:31.156 --rc geninfo_unexecuted_blocks=1
00:30:31.156
00:30:31.156 '
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
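
The scripts/common.sh trace above is a generic dotted-version comparison used here to decide whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field, with the first differing field deciding. A self-contained sketch of the same idea (not the literal library code, whose trace appears above):

  lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field => less-than
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov is older than 2.x"            # 1 < 2 in the first field, as traced here
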
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:31.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
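
One small detail worth noting in the environment setup above: each run generates a fresh host identity instead of reusing a fixed one. nvme gen-hostnqn (from nvme-cli) prints a UUID-based host NQN, and NVME_HOSTID is the bare UUID; the derivation is not shown in the trace, but one plausible way to produce both values seen here:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # strips the prefix, leaving the bare UUID

These feed the --hostnqn/--hostid arguments that any later nvme connect calls would use.
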
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:31.156 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:31.156 ************************************
00:30:31.156 START TEST nvmf_shutdown_tc1
00:30:31.156 ************************************
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:30:31.157 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:30:39.297 Found 0000:31:00.0 (0x8086 - 0x159b)
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:30:39.297 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:30:39.298 Found 0000:31:00.1 (0x8086 - 0x159b)
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:30:39.298 Found net devices under 0000:31:00.0: cvl_0_0
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:30:39.298 Found net devices under 0000:31:00.1: cvl_0_1
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:39.298 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:39.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:39.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:30:39.298
00:30:39.298 --- 10.0.0.2 ping statistics ---
00:30:39.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.298 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:39.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:39.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:30:39.298
00:30:39.298 --- 10.0.0.1 ping statistics ---
00:30:39.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.298 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=507502
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 507502
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 507502 ']'
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:39.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:39.298 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.298 [2024-09-27 15:49:19.368269] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:30:39.298 [2024-09-27 15:49:19.368332] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:39.298 [2024-09-27 15:49:19.459212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:39.298 [2024-09-27 15:49:19.506683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:39.298 [2024-09-27 15:49:19.506739] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:39.298 [2024-09-27 15:49:19.506747] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:39.298 [2024-09-27 15:49:19.506755] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:39.298 [2024-09-27 15:49:19.506761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:39.299 [2024-09-27 15:49:19.506941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:30:39.299 [2024-09-27 15:49:19.507110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:30:39.299 [2024-09-27 15:49:19.507270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:39.299 [2024-09-27 15:49:19.507272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.871 [2024-09-27 15:49:20.243581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:30:39.871 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:39.872 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:39.872 Malloc1
00:30:40.134 [2024-09-27 15:49:20.361235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.134 Malloc2 00:30:40.134 Malloc3 00:30:40.134 Malloc4 00:30:40.134 Malloc5 00:30:40.134 Malloc6 00:30:40.134 Malloc7 00:30:40.396 Malloc8 00:30:40.396 Malloc9 00:30:40.396 Malloc10 00:30:40.396 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.396 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:40.396 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.396 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:40.396 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=507771 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 507771 /var/tmp/bdevperf.sock 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 507771 ']' 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:40.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
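The `Waiting for process to start up...` line above is the harness blocking on a freshly forked app's RPC socket; the `bdev_svc` invocation traced just below shows the whole pattern. A minimal sketch run from the spdk tree root, with arguments taken from this log and `waitforlisten` being the autotest_common.sh helper:

# fork the app with a config generated on the fly, then block until its RPC
# socket accepts connections (waitforlisten polls the pid and the socket)
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock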
00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 
00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.397 [2024-09-27 15:49:20.881588] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:30:40.397 [2024-09-27 15:49:20.881660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.397 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.397 { 00:30:40.397 "params": { 00:30:40.397 "name": "Nvme$subsystem", 00:30:40.397 "trtype": "$TEST_TRANSPORT", 00:30:40.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.397 "adrfam": "ipv4", 00:30:40.397 "trsvcid": "$NVMF_PORT", 00:30:40.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.397 "hdgst": ${hdgst:-false}, 00:30:40.397 "ddgst": ${ddgst:-false} 00:30:40.397 }, 00:30:40.397 "method": "bdev_nvme_attach_controller" 00:30:40.397 } 00:30:40.397 EOF 00:30:40.397 )") 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.660 { 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme$subsystem", 00:30:40.660 "trtype": "$TEST_TRANSPORT", 00:30:40.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "$NVMF_PORT", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.660 "hdgst": ${hdgst:-false}, 00:30:40.660 "ddgst": ${ddgst:-false} 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 } 00:30:40.660 EOF 00:30:40.660 )") 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.660 { 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme$subsystem", 00:30:40.660 "trtype": "$TEST_TRANSPORT", 00:30:40.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "$NVMF_PORT", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.660 "hdgst": ${hdgst:-false}, 00:30:40.660 "ddgst": ${ddgst:-false} 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 } 00:30:40.660 EOF 00:30:40.660 )") 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:40.660 { 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme$subsystem", 00:30:40.660 "trtype": "$TEST_TRANSPORT", 00:30:40.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.660 "adrfam": "ipv4", 
00:30:40.660 "trsvcid": "$NVMF_PORT", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.660 "hdgst": ${hdgst:-false}, 00:30:40.660 "ddgst": ${ddgst:-false} 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 } 00:30:40.660 EOF 00:30:40.660 )") 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:30:40.660 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme1", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme2", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme3", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme4", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme5", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme6", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 "adrfam": "ipv4", 00:30:40.660 "trsvcid": "4420", 00:30:40.660 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:40.660 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:40.660 "hdgst": false, 00:30:40.660 "ddgst": false 00:30:40.660 }, 00:30:40.660 "method": "bdev_nvme_attach_controller" 00:30:40.660 },{ 00:30:40.660 "params": { 00:30:40.660 "name": "Nvme7", 00:30:40.660 "trtype": "tcp", 00:30:40.660 "traddr": "10.0.0.2", 00:30:40.660 
"adrfam": "ipv4", 00:30:40.661 "trsvcid": "4420", 00:30:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:40.661 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:40.661 "hdgst": false, 00:30:40.661 "ddgst": false 00:30:40.661 }, 00:30:40.661 "method": "bdev_nvme_attach_controller" 00:30:40.661 },{ 00:30:40.661 "params": { 00:30:40.661 "name": "Nvme8", 00:30:40.661 "trtype": "tcp", 00:30:40.661 "traddr": "10.0.0.2", 00:30:40.661 "adrfam": "ipv4", 00:30:40.661 "trsvcid": "4420", 00:30:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:40.661 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:40.661 "hdgst": false, 00:30:40.661 "ddgst": false 00:30:40.661 }, 00:30:40.661 "method": "bdev_nvme_attach_controller" 00:30:40.661 },{ 00:30:40.661 "params": { 00:30:40.661 "name": "Nvme9", 00:30:40.661 "trtype": "tcp", 00:30:40.661 "traddr": "10.0.0.2", 00:30:40.661 "adrfam": "ipv4", 00:30:40.661 "trsvcid": "4420", 00:30:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:40.661 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:40.661 "hdgst": false, 00:30:40.661 "ddgst": false 00:30:40.661 }, 00:30:40.661 "method": "bdev_nvme_attach_controller" 00:30:40.661 },{ 00:30:40.661 "params": { 00:30:40.661 "name": "Nvme10", 00:30:40.661 "trtype": "tcp", 00:30:40.661 "traddr": "10.0.0.2", 00:30:40.661 "adrfam": "ipv4", 00:30:40.661 "trsvcid": "4420", 00:30:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:40.661 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:40.661 "hdgst": false, 00:30:40.661 "ddgst": false 00:30:40.661 }, 00:30:40.661 "method": "bdev_nvme_attach_controller" 00:30:40.661 }' 00:30:40.661 [2024-09-27 15:49:20.968873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.661 [2024-09-27 15:49:21.015802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 507771 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:42.048 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:42.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 507771 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 507502 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 [2024-09-27 15:49:23.393704] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:30:42.993 [2024-09-27 15:49:23.393757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508261 ] 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.993 "adrfam": "ipv4", 00:30:42.993 "trsvcid": "$NVMF_PORT", 00:30:42.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.993 "hdgst": ${hdgst:-false}, 00:30:42.993 "ddgst": ${ddgst:-false} 00:30:42.993 }, 00:30:42.993 "method": "bdev_nvme_attach_controller" 00:30:42.993 } 00:30:42.993 EOF 00:30:42.993 )") 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.993 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.993 { 00:30:42.993 "params": { 00:30:42.993 "name": "Nvme$subsystem", 00:30:42.993 "trtype": "$TEST_TRANSPORT", 00:30:42.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "$NVMF_PORT", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.994 "hdgst": ${hdgst:-false}, 00:30:42.994 "ddgst": ${ddgst:-false} 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 } 00:30:42.994 EOF 00:30:42.994 )") 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:42.994 { 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme$subsystem", 00:30:42.994 "trtype": "$TEST_TRANSPORT", 00:30:42.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "$NVMF_PORT", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.994 "hdgst": ${hdgst:-false}, 00:30:42.994 "ddgst": ${ddgst:-false} 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 } 00:30:42.994 EOF 00:30:42.994 )") 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:30:42.994 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme1", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme2", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme3", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme4", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme5", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme6", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme7", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme8", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme9", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 },{ 00:30:42.994 "params": { 00:30:42.994 "name": "Nvme10", 00:30:42.994 "trtype": "tcp", 00:30:42.994 "traddr": "10.0.0.2", 00:30:42.994 "adrfam": "ipv4", 00:30:42.994 "trsvcid": "4420", 00:30:42.994 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:42.994 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:42.994 "hdgst": false, 00:30:42.994 "ddgst": false 00:30:42.994 }, 00:30:42.994 "method": "bdev_nvme_attach_controller" 00:30:42.994 }' 00:30:42.994 [2024-09-27 15:49:23.473254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.255 [2024-09-27 15:49:23.504286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.640 Running I/O for 1 seconds... 00:30:45.583 1861.00 IOPS, 116.31 MiB/s 00:30:45.583 Latency(us) 00:30:45.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.583 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme1n1 : 1.16 220.00 13.75 0.00 0.00 287965.44 19333.12 267386.88 00:30:45.583 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme2n1 : 1.14 224.28 14.02 0.00 0.00 277538.13 18677.76 256901.12 00:30:45.583 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme3n1 : 1.16 221.07 13.82 0.00 0.00 276988.37 19333.12 253405.87 00:30:45.583 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme4n1 : 1.16 276.95 17.31 0.00 0.00 217172.65 12724.91 239424.85 00:30:45.583 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme5n1 : 1.15 223.11 13.94 0.00 0.00 264542.93 16602.45 248162.99 00:30:45.583 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme6n1 : 1.15 222.46 13.90 0.00 0.00 260590.29 21408.43 251658.24 00:30:45.583 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme7n1 : 1.19 269.07 16.82 0.00 0.00 212211.71 12124.16 248162.99 00:30:45.583 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.583 Nvme8n1 : 1.20 267.43 16.71 0.00 0.00 209886.63 12997.97 270882.13 00:30:45.583 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.583 Verification LBA range: start 0x0 length 0x400 00:30:45.584 Nvme9n1 : 1.19 214.69 13.42 0.00 0.00 256460.59 21626.88 270882.13 00:30:45.584 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:45.584 
Verification LBA range: start 0x0 length 0x400 00:30:45.584 Nvme10n1 : 1.21 264.87 16.55 0.00 0.00 204635.31 9666.56 267386.88 00:30:45.584 =================================================================================================================== 00:30:45.584 Total : 2403.93 150.25 0.00 0.00 243542.60 9666.56 270882.13 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.844 rmmod nvme_tcp 00:30:45.844 rmmod nvme_fabrics 00:30:45.844 rmmod nvme_keyring 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 507502 ']' 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 507502 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 507502 ']' 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 507502 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 507502 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:45.844 15:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 507502' 00:30:45.844 killing process with pid 507502 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 507502 00:30:45.844 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 507502 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.104 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.651 00:30:48.651 real 0m17.015s 00:30:48.651 user 0m33.849s 00:30:48.651 sys 0m7.165s 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:48.651 ************************************ 00:30:48.651 END TEST nvmf_shutdown_tc1 00:30:48.651 ************************************ 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:48.651 ************************************ 00:30:48.651 START TEST nvmf_shutdown_tc2 00:30:48.651 ************************************ 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 
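Two quick checks on the tc1 wrap-up above. First, the summary row is internally consistent: 2403.93 IOPS at the 64 KiB I/O size set by `-o 65536` is exactly the reported 150.25 MiB/s. Second, `iptr` strips only the firewall rules the test itself added, which is why the ACCEPT rule was tagged with an `SPDK_NVMF` comment when it was inserted:

awk 'BEGIN { printf "%.2f MiB/s\n", 2403.93 * 65536 / (1024 * 1024) }'   # -> 150.25
iptables-save | grep -v SPDK_NVMF | iptables-restore   # what iptr expands to, per the trace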
00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.651 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.652 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.652 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.652 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.652 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
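The commands around this point (the remaining steps appear just below) split the two ports of one physical NIC into a point-to-point rig: the target port moves into a private network namespace, the initiator port stays in the root namespace, and a ping in each direction verifies the path before the target starts. Consolidated from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                         # tagged so iptr can remove it later
ping -c 1 10.0.0.2                                         # root ns -> namespaced target port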
00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:30:48.652 00:30:48.652 --- 10.0.0.2 ping statistics --- 00:30:48.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.652 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:30:48.652 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:30:48.652 00:30:48.652 --- 10.0.0.1 ping statistics --- 00:30:48.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.652 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:48.653 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=509379 00:30:48.653 15:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 509379 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 509379 ']' 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:48.653 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:48.653 [2024-09-27 15:49:29.103264] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:30:48.653 [2024-09-27 15:49:29.103326] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.913 [2024-09-27 15:49:29.191557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:48.913 [2024-09-27 15:49:29.225046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.913 [2024-09-27 15:49:29.225082] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.913 [2024-09-27 15:49:29.225088] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.913 [2024-09-27 15:49:29.225093] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.913 [2024-09-27 15:49:29.225097] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
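The ipts wrapper a few entries above (nvmf/common.sh@287) opens the NVMe/TCP listener port on the initiator-facing interface and tags the rule with an SPDK_NVMF comment; the iptr helper run during teardown later in this test strips exactly those tagged rules by filtering them out of a save/restore round trip. The pattern, condensed from the traced commands:

    # open TCP/4420 and record the rule text in the comment tag
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore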
00:30:48.913 [2024-09-27 15:49:29.225247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.914 [2024-09-27 15:49:29.225406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.914 [2024-09-27 15:49:29.225560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.914 [2024-09-27 15:49:29.225563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 [2024-09-27 15:49:29.933013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.485 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.746 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:49.746 Malloc1 00:30:49.746 [2024-09-27 15:49:30.036092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.746 Malloc2 00:30:49.746 Malloc3 00:30:49.746 Malloc4 00:30:49.746 Malloc5 00:30:49.746 Malloc6 00:30:50.008 Malloc7 00:30:50.008 Malloc8 00:30:50.008 Malloc9 00:30:50.008 Malloc10 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=509754 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 509754 /var/tmp/bdevperf.sock 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 509754 ']' 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:50.008 15:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:50.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 
"name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.008 [2024-09-27 15:49:30.478177] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:30:50.008 [2024-09-27 15:49:30.478233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509754 ] 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.009 { 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme$subsystem", 00:30:50.009 "trtype": "$TEST_TRANSPORT", 00:30:50.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "$NVMF_PORT", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.009 "hdgst": ${hdgst:-false}, 00:30:50.009 "ddgst": ${ddgst:-false} 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 } 00:30:50.009 EOF 00:30:50.009 )") 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.009 { 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme$subsystem", 00:30:50.009 "trtype": "$TEST_TRANSPORT", 00:30:50.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "$NVMF_PORT", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.009 "hdgst": ${hdgst:-false}, 00:30:50.009 "ddgst": ${ddgst:-false} 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 } 00:30:50.009 EOF 00:30:50.009 )") 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.009 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.009 { 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme$subsystem", 00:30:50.009 "trtype": "$TEST_TRANSPORT", 00:30:50.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "$NVMF_PORT", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.009 "hdgst": ${hdgst:-false}, 00:30:50.009 "ddgst": ${ddgst:-false} 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 } 00:30:50.009 EOF 00:30:50.009 )") 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:50.269 { 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme$subsystem", 00:30:50.269 "trtype": "$TEST_TRANSPORT", 00:30:50.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.269 
"adrfam": "ipv4", 00:30:50.269 "trsvcid": "$NVMF_PORT", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.269 "hdgst": ${hdgst:-false}, 00:30:50.269 "ddgst": ${ddgst:-false} 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 } 00:30:50.269 EOF 00:30:50.269 )") 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:30:50.269 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme1", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme2", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme3", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme4", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme5", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme6", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme7", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 
00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme8", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme9", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 },{ 00:30:50.269 "params": { 00:30:50.269 "name": "Nvme10", 00:30:50.269 "trtype": "tcp", 00:30:50.269 "traddr": "10.0.0.2", 00:30:50.269 "adrfam": "ipv4", 00:30:50.269 "trsvcid": "4420", 00:30:50.269 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:50.269 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:50.269 "hdgst": false, 00:30:50.269 "ddgst": false 00:30:50.269 }, 00:30:50.269 "method": "bdev_nvme_attach_controller" 00:30:50.269 }' 00:30:50.269 [2024-09-27 15:49:30.556687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.269 [2024-09-27 15:49:30.588474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.291 Running I/O for 10 seconds... 
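Each stanza printed by gen_nvmf_target_json above becomes one bdev_nvme_attach_controller call inside bdevperf, which receives the whole document over process substitution as --json /dev/fd/63. With the shell variables expanded, a single stanza reads as follows (the helper joins ten of them and runs the result through jq; whether it also wraps them in the usual subsystems/bdev config envelope is not visible in this trace):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

The polling that follows in the trace is shutdown.sh's waitforio helper: it queries bdevperf's private RPC socket for Nvme1n1's iostat until at least 100 reads have completed, retrying up to ten times at 0.25 s intervals, so the target is only killed while I/O is demonstrably in flight. Reconstructed from the traced lines (target/shutdown.sh@58-70), roughly:

    waitforio() {
        local rpc_sock=$1 bdev=$2   # here: /var/tmp/bdevperf.sock Nvme1n1
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break               # enough I/O observed; safe to proceed with shutdown
            fi
            sleep 0.25
        done
        return $ret
    }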
00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:52.291 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.575 15:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:52.575 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 509754 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 509754 ']' 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 509754 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 509754 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.856 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 509754' 00:30:52.856 killing process with pid 509754 00:30:52.856 15:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 509754
15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 509754
00:30:53.133 1664.00 IOPS, 104.00 MiB/s
Received shutdown signal, test time was about 1.087718 seconds
00:30:53.133
00:30:53.133                                                      Latency(us)
00:30:53.133 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s      Average        min        max
00:30:53.133 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme1n1  :       1.08     236.06      14.75       0.00       0.00    267089.49   21299.20  297096.53
00:30:53.133 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme2n1  :       1.09     235.55      14.72       0.00       0.00    260565.76   16930.13  267386.88
00:30:53.133 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme3n1  :       1.07     238.87      14.93       0.00       0.00    249887.04   12724.91  255153.49
00:30:53.133 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme4n1  :       1.08     236.70      14.79       0.00       0.00    245399.25   20206.93  249910.61
00:30:53.133 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme5n1  :       1.06     180.68      11.29       0.00       0.00    311641.32   19333.12  255153.49
00:30:53.133 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme6n1  :       1.08     238.11      14.88       0.00       0.00    229771.09   31238.83  232434.35
00:30:53.133 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme7n1  :       1.08     237.40      14.84       0.00       0.00    223589.55   16165.55  255153.49
00:30:53.133 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme8n1  :       1.05     182.58      11.41       0.00       0.00    279204.69   19770.03  242920.11
00:30:53.133 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme9n1  :       1.06     181.92      11.37       0.00       0.00    271403.80   18350.08  251658.24
00:30:53.133 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.133 Verification LBA range: start 0x0 length 0x400
00:30:53.133 Nvme10n1 :       1.07     179.98      11.25       0.00       0.00    266087.82   14964.05  269134.51
00:30:53.133 ===================================================================================================================
00:30:53.133 Total    :               2147.86     134.24       0.00       0.00    258061.71   12724.91  297096.53
00:30:53.133 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:30:54.120 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 509379
00:30:54.120 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.121 rmmod nvme_tcp 00:30:54.121 rmmod nvme_fabrics 00:30:54.121 rmmod nvme_keyring 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 509379 ']' 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 509379 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 509379 ']' 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 509379 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.121 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 509379 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 509379' 00:30:54.398 killing process with pid 509379 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 509379 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 509379 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ 
tcp == \t\c\p ]] 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.398 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.136 00:30:57.136 real 0m8.272s 00:30:57.136 user 0m25.719s 00:30:57.136 sys 0m1.290s 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:57.136 ************************************ 00:30:57.136 END TEST nvmf_shutdown_tc2 00:30:57.136 ************************************ 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:57.136 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:57.136 ************************************ 00:30:57.136 START TEST nvmf_shutdown_tc3 00:30:57.136 ************************************ 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:57.136 15:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.136 15:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:57.136 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:57.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:57.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.137 
15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:57.137 Found net devices under 0000:31:00.0: cvl_0_0 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:57.137 Found net devices under 0000:31:00.1: cvl_0_1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:30:57.137 15:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:30:57.137 00:30:57.137 --- 10.0.0.2 ping statistics --- 00:30:57.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.137 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:57.137 00:30:57.137 --- 10.0.0.1 ping statistics --- 00:30:57.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.137 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=511245 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 511245 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:57.137 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 511245 ']' 00:30:57.138 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:30:57.138 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.138 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.138 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.138 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.138 [2024-09-27 15:49:37.450777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:30:57.138 [2024-09-27 15:49:37.450827] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.138 [2024-09-27 15:49:37.509649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:57.138 [2024-09-27 15:49:37.538313] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.138 [2024-09-27 15:49:37.538345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.138 [2024-09-27 15:49:37.538351] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.138 [2024-09-27 15:49:37.538356] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.138 [2024-09-27 15:49:37.538361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
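The nvmf_tcp_init sequence above builds the test topology: both port addresses are flushed, the target port cvl_0_0 moves into a fresh namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420 on the initiator side, and one ping in each direction confirms the link. Condensed from the trace (all commands appear above; run as root):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic in, tagged with a comment so cleanup can find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator

The nvmf_tgt binary itself is then launched under 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD prefix above), so its listener lives inside the namespace.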
00:30:57.138 [2024-09-27 15:49:37.538499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.138 [2024-09-27 15:49:37.538649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.138 [2024-09-27 15:49:37.538761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.138 [2024-09-27 15:49:37.538763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.453 [2024-09-27 15:49:37.680473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.453 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.453 Malloc1 00:30:57.453 [2024-09-27 15:49:37.779180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.453 Malloc2 00:30:57.453 Malloc3 00:30:57.453 Malloc4 00:30:57.453 Malloc5 00:30:57.745 Malloc6 00:30:57.745 Malloc7 00:30:57.745 Malloc8 00:30:57.745 Malloc9 00:30:57.745 Malloc10 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=511309 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 511309 /var/tmp/bdevperf.sock 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 511309 ']' 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:57.745 15:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:57.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.745 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.745 { 00:30:57.745 "params": { 00:30:57.745 "name": "Nvme$subsystem", 00:30:57.745 "trtype": "$TEST_TRANSPORT", 00:30:57.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.745 "adrfam": "ipv4", 00:30:57.745 "trsvcid": "$NVMF_PORT", 00:30:57.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 
"name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 [2024-09-27 15:49:38.222945] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:30:57.746 [2024-09-27 15:49:38.222995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511309 ] 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:57.746 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:57.746 { 00:30:57.746 "params": { 00:30:57.746 "name": "Nvme$subsystem", 00:30:57.746 "trtype": "$TEST_TRANSPORT", 00:30:57.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.746 "adrfam": "ipv4", 00:30:57.746 "trsvcid": "$NVMF_PORT", 00:30:57.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.746 "hdgst": ${hdgst:-false}, 00:30:57.746 "ddgst": ${ddgst:-false} 00:30:57.746 }, 00:30:57.746 "method": "bdev_nvme_attach_controller" 00:30:57.746 } 00:30:57.746 EOF 00:30:57.746 )") 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:58.008 { 00:30:58.008 "params": { 00:30:58.008 "name": "Nvme$subsystem", 00:30:58.008 "trtype": "$TEST_TRANSPORT", 00:30:58.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.008 "adrfam": "ipv4", 00:30:58.008 "trsvcid": "$NVMF_PORT", 00:30:58.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.008 "hdgst": ${hdgst:-false}, 00:30:58.008 "ddgst": ${ddgst:-false} 00:30:58.008 }, 00:30:58.008 "method": "bdev_nvme_attach_controller" 00:30:58.008 } 00:30:58.008 EOF 00:30:58.008 )") 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:58.008 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:58.008 { 00:30:58.008 "params": { 00:30:58.008 "name": "Nvme$subsystem", 00:30:58.008 "trtype": "$TEST_TRANSPORT", 00:30:58.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.008 
"adrfam": "ipv4", 00:30:58.008 "trsvcid": "$NVMF_PORT", 00:30:58.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.008 "hdgst": ${hdgst:-false}, 00:30:58.008 "ddgst": ${ddgst:-false} 00:30:58.008 }, 00:30:58.008 "method": "bdev_nvme_attach_controller" 00:30:58.009 } 00:30:58.009 EOF 00:30:58.009 )") 00:30:58.009 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:30:58.009 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:30:58.009 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:30:58.009 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme1", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme2", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme3", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme4", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme5", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme6", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme7", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 
00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme8", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme9", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 },{ 00:30:58.009 "params": { 00:30:58.009 "name": "Nvme10", 00:30:58.009 "trtype": "tcp", 00:30:58.009 "traddr": "10.0.0.2", 00:30:58.009 "adrfam": "ipv4", 00:30:58.009 "trsvcid": "4420", 00:30:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:58.009 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:58.009 "hdgst": false, 00:30:58.009 "ddgst": false 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 }' 00:30:58.009 [2024-09-27 15:49:38.300713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.009 [2024-09-27 15:49:38.332051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.923 Running I/O for 10 seconds... 
00:30:59.923 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.923 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:59.923 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:59.923 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.923 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:59.923 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:59.924 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:31:00.185 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 511245 00:31:00.454 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 511245 ']' 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 511245 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 511245 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:00.455 15:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 511245' 00:31:00.455
killing process with pid 511245
00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 511245
00:31:00.455 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 511245
00:31:00.455 [2024-09-27 15:49:40.908635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2075b20 is same with the state(6) to be set
00:31:00.455 [message repeated for tqpair=0x2075b20, timestamps 15:49:40.908681 through 15:49:40.908985]
00:31:00.455 [2024-09-27 15:49:40.911042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f02be0 is same with the state(6) to be set
00:31:00.456 [message repeated for tqpair=0x1f02be0, timestamps 15:49:40.911066 through 15:49:40.911364]
00:31:00.456 [2024-09-27 15:49:40.912618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set
00:31:00.456 [message repeated for tqpair=0x1f030d0 from 15:49:40.912643 on]
with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912858] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.912948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f030d0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the 
state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.457 [2024-09-27 15:49:40.913900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 
15:49:40.913938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.913995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f035a0 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same 
with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.914998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915068] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the 
state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.458 [2024-09-27 15:49:40.915197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.915201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.915206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.915211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.915216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03a70 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 
15:49:40.916353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same 
with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.459 [2024-09-27 15:49:40.916467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.916495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f40 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917235] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the 
state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04430 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.917981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.460 [2024-09-27 15:49:40.924514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.460 [2024-09-27 15:49:40.924550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.460 [2024-09-27 15:49:40.924563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.460 [2024-09-27 15:49:40.924574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
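For context: this *ERROR* line is emitted by the receive-state setter in SPDK's NVMe-oF TCP transport when it is asked to move a queue pair's PDU receive state machine into the state it already occupies; the numeric argument, state(6) here, is the enum value of the requested state. Below is a minimal sketch of such a guard, assuming SPDK-style naming; it is illustrative, not the verbatim tcp.c source:

    #include <stdio.h>

    /* Hypothetical, trimmed receive-state enum; in the real transport it
     * has more states, and 6 identifies the one requested in the log. */
    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY,
        /* ... intermediate states elided ... */
        RECV_STATE_ERROR
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* No-op transition: already in the requested state. Log and bail. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state; /* the real code also adjusts socket polling here */
    }

Seen in dense bursts like the ones above, the message suggests many qpairs were repeatedly driven toward a state they had already reached, e.g. while being quiesced and torn down at once; it is noisy rather than fatal.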
00:31:00.460 [2024-09-27 15:49:40.924514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.460 [2024-09-27 15:49:40.924550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair above repeats for cid:1, cid:2 and cid:3, and each group of four aborted ASYNC EVENT REQUESTs is followed by one nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=<addr> is same with the state(6) to be set; such groups are logged for tqpair=0xe2f190, 0x9ba680, 0x9ba220, 0x9b02e0 and 0xde5a30; near-duplicate lines elided ...]
state(6) to be set 00:31:00.461 [2024-09-27 15:49:40.925014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe30620 is same with the state(6) to be set 00:31:00.461 [2024-09-27 15:49:40.925102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7610 is same with the state(6) to be set 00:31:00.461 [2024-09-27 15:49:40.925191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925209] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdda6d0 is same with the state(6) to be set 00:31:00.461 [2024-09-27 15:49:40.925278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.461 [2024-09-27 15:49:40.925334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde6190 is same with the state(6) to be set 00:31:00.461 [2024-09-27 15:49:40.925845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.925985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.925994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.461 [2024-09-27 15:49:40.926282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.461 [2024-09-27 15:49:40.926290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.462 [2024-09-27 15:49:40.926433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 
[2024-09-27 15:49:40.926602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 
15:49:40.926767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 
15:49:40.926945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.926962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.926986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:00.462 [2024-09-27 15:49:40.927028] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbbe830 was disconnected and freed. reset controller. 00:31:00.462 [2024-09-27 15:49:40.927064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.462 [2024-09-27 15:49:40.927270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.462 [2024-09-27 15:49:40.927280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927633]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.463 [2024-09-27 15:49:40.927669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.463 [2024-09-27 15:49:40.927676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.463 [2024-09-27 15:49:40.927769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 15:49:40.927876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.464 [2024-09-27 
15:49:40.927881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04900 is same with the state(6) to be set 00:31:00.735 [2024-09-27 15:49:40.937679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.735 [2024-09-27 15:49:40.937876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.735 [2024-09-27 15:49:40.937883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:00.735 [2024-09-27 15:49:40.937924-.938272] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:44-63 nsid:1 lba:30208-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (20 command/completion pairs)
00:31:00.735 [2024-09-27 15:49:40.938281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfab0 is same with the state(6) to be set
00:31:00.735 [2024-09-27 15:49:40.938331] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbbfab0 was disconnected and freed. reset controller.
00:31:00.735 [2024-09-27 15:49:40.938979-.939052] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (4 command/completion pairs)
00:31:00.736 [2024-09-27 15:49:40.939060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f390 is same with the state(6) to be set
00:31:00.736 [2024-09-27 15:49:40.939083-.939207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0xe2f190, 0x9ba680, 0x9ba220, 0x9b02e0, 0xde5a30, 0xe30620, 0x8c7610, 0xdda6d0, 0xde6190 (9 records)
00:31:00.736 [2024-09-27 15:49:40.941690-.942860] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:31:00.738 [2024-09-27 15:49:40.942928] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea9090 was disconnected and freed. reset controller.
00:31:00.738 [2024-09-27 15:49:40.943103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:00.738 [2024-09-27 15:49:40.943119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:00.738 [2024-09-27 15:49:40.944910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.738 [2024-09-27 15:49:40.944934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ba680 with addr=10.0.0.2, port=4420
00:31:00.738 [2024-09-27 15:49:40.944943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba680 is same with the state(6) to be set
00:31:00.738 [2024-09-27 15:49:40.945397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.738 [2024-09-27 15:49:40.945439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ba220 with addr=10.0.0.2, port=4420
00:31:00.738 [2024-09-27 15:49:40.945452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba220 is same with the state(6) to be set
00:31:00.738 [2024-09-27 15:49:40.946032-.946291] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (4 records)
00:31:00.738 [2024-09-27 15:49:40.946307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:00.738 [2024-09-27 15:49:40.946340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ba680 (9): Bad file descriptor
00:31:00.738 [2024-09-27 15:49:40.946352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ba220 (9): Bad file descriptor
00:31:00.738 [2024-09-27 15:49:40.946700] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:00.738 [2024-09-27 15:49:40.947196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.738 [2024-09-27 15:49:40.947236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdda6d0 with addr=10.0.0.2, port=4420
00:31:00.738 [2024-09-27 15:49:40.947247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdda6d0 is same with the state(6) to be set
00:31:00.738 [2024-09-27 15:49:40.947259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:00.738 [2024-09-27 15:49:40.947266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:00.738 [2024-09-27 15:49:40.947275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:00.738 [2024-09-27 15:49:40.947292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:31:00.738 [2024-09-27 15:49:40.947299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:31:00.738 [2024-09-27 15:49:40.947306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:31:00.738 [2024-09-27 15:49:40.947371-.948518] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:31:00.740 [2024-09-27 15:49:40.948527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set
00:31:00.740 [2024-09-27 15:49:40.948570] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea6590 was disconnected and freed. reset controller.
00:31:00.740 [2024-09-27 15:49:40.948614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.740 [2024-09-27 15:49:40.948624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:1-63, lba:24704-32640 (step 128), len:128 ...]
00:31:00.742 [2024-09-27 15:49:40.949785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbb790 is same with the state(6) to be set
00:31:00.742 [2024-09-27 15:49:40.949824] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdbb790 was disconnected and freed. reset controller.
00:31:00.742 [2024-09-27 15:49:40.949863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:00.742 [2024-09-27 15:49:40.949876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:00.742 [2024-09-27 15:49:40.949899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdda6d0 (9): Bad file descriptor
00:31:00.742 [2024-09-27 15:49:40.949932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f390 (9): Bad file descriptor
00:31:00.742 [2024-09-27 15:49:40.949983] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:00.742 [2024-09-27 15:49:40.949999] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:00.742 [2024-09-27 15:49:40.952463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:31:00.742 [2024-09-27 15:49:40.952482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:31:00.742 [2024-09-27 15:49:40.952506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:31:00.742 [2024-09-27 15:49:40.952515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:31:00.742 [2024-09-27 15:49:40.952526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
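(Editor's note) The sequence just logged is the bdev_nvme reset path giving up: the qpair is disconnected and freed, the reset completion reports "Resetting controller failed", flushing the dead TCP qpairs returns EBADF, a concurrent failover is refused, and the reconnect poll finally marks cnode6 "in failed state". The bdev_nvme machinery is internal to SPDK; as an illustrative-only sketch, a standalone application would drive the equivalent reset-and-check cycle through the public controller API, roughly:

/* Sketch: reset a controller and detect the unrecoverable-failure case. */
#include "spdk/nvme.h"
#include <stdio.h>

static int
try_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Tears down all qpairs (aborting queued I/O with SQ DELETION, as in
	 * the dumps above) and re-initializes the controller. */
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
		/* Corresponds to "controller reinitialization failed ...
		 * in failed state" in the log: the controller cannot be
		 * recovered and its handle should be detached. */
		fprintf(stderr, "controller reset failed (rc=%d)\n", rc);
		return -1;
	}
	return 0;
}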
00:31:00.742 [2024-09-27 15:49:40.952566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.742 [2024-09-27 15:49:40.952576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:1-63, lba:24704-32640 (step 128), len:128 ...]
00:31:00.744 [2024-09-27 15:49:40.953729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea4fa0 is same with the state(6) to be set
00:31:00.744 [2024-09-27 15:49:40.955020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.744 [2024-09-27 15:49:40.955036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs repeat for cid:1-55, lba:16512-23424 (step 128), len:128 ...]
00:31:00.746 [2024-09-27 
15:49:40.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.956195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.956204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7b10 is same with the state(6) to be set 00:31:00.746 [2024-09-27 15:49:40.957479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.746 [2024-09-27 15:49:40.957841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.746 [2024-09-27 15:49:40.957851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.957988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.957998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.747 [2024-09-27 15:49:40.958451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.747 [2024-09-27 15:49:40.958461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.958643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.958651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbcd10 is same with the state(6) to be set 00:31:00.748 [2024-09-27 15:49:40.959934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.959949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.959961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.959970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.959979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.959988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.748 [2024-09-27 15:49:40.960319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.748 [2024-09-27 15:49:40.960330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.749 [2024-09-27 15:49:40.960810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.749 [2024-09-27 15:49:40.960920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.749 [2024-09-27 15:49:40.960928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.960938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.960945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.960955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.960963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.960973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.960980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.960990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 
15:49:40.960998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.750 [2024-09-27 15:49:40.961102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.750 [2024-09-27 15:49:40.961111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf6d0 is same with the state(6) to be set 00:31:00.750 [2024-09-27 15:49:40.963559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:00.750 [2024-09-27 15:49:40.963584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:31:00.750 [2024-09-27 15:49:40.963595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:31:00.750 [2024-09-27 15:49:40.963610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:31:00.750 [2024-09-27 15:49:40.964025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.750 [2024-09-27 15:49:40.964041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde6190 with addr=10.0.0.2, port=4420
00:31:00.750 [2024-09-27 15:49:40.964050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde6190 is same with the state(6) to be set
00:31:00.750 [2024-09-27 15:49:40.964397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.750 [2024-09-27 15:49:40.964409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c7610 with addr=10.0.0.2, port=4420
00:31:00.750 [2024-09-27 15:49:40.964416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7610 is same with the state(6) to be set
00:31:00.750 [2024-09-27 15:49:40.964461] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:00.750 [2024-09-27 15:49:40.964482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7610 (9): Bad file descriptor
00:31:00.750 [2024-09-27 15:49:40.964496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde6190 (9): Bad file descriptor
00:31:00.750 [2024-09-27 15:49:40.965068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:31:00.750 [2024-09-27 15:49:40.965459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.750 [2024-09-27 15:49:40.965473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b02e0 with addr=10.0.0.2, port=4420
00:31:00.750 [2024-09-27 15:49:40.965480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b02e0 is same with the state(6) to be set
00:31:00.750 [2024-09-27 15:49:40.965826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.750 [2024-09-27 15:49:40.965838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5a30 with addr=10.0.0.2, port=4420
00:31:00.750 [2024-09-27 15:49:40.965846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5a30 is same with the state(6) to be set
00:31:00.750 [2024-09-27 15:49:40.966185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.750 [2024-09-27 15:49:40.966196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe30620 with addr=10.0.0.2, port=4420
00:31:00.750 [2024-09-27 15:49:40.966204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe30620 is same with the state(6) to be set
00:31:00.750 [2024-09-27 15:49:40.967009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.750 [2024-09-27 15:49:40.967021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 29 further identical READ/ABORTED - SQ DELETION (00/08) pairs elided: qid:1, cid 1-29, lba 16512-20096 in steps of 128, timestamps 15:49:40.967033-.967540 ...]
00:31:00.751 [2024-09-27 15:49:40.967550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.751 [2024-09-27 15:49:40.967558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:31:00.751 [2024-09-27 15:49:40.967569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 [2024-09-27 15:49:40.967729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.751 
[2024-09-27 15:49:40.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.751 [2024-09-27 15:49:40.967756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 
15:49:40.967935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.967990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.967998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.752 [2024-09-27 15:49:40.968104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.752 [2024-09-27 15:49:40.968114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.752 [2024-09-27 15:49:40.968121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:00.752 [2024-09-27 15:49:40.968131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.752 [2024-09-27 15:49:40.968139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:00.752 [2024-09-27 15:49:40.968149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.752 [2024-09-27 15:49:40.968157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:00.752 [2024-09-27 15:49:40.968165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbe150 is same with the state(6) to be set
00:31:00.752 [2024-09-27 15:49:40.969874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:00.752 [2024-09-27 15:49:40.969901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:00.752 [2024-09-27 15:49:40.969911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:00.752 task offset: 24576 on job bdev=Nvme1n1 fails
00:31:00.752
00:31:00.752 Latency(us)
00:31:00.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:00.752 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme1n1 ended in about 0.97 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme1n1 : 0.97 197.28 12.33 65.76 0.00 240606.29 15728.64 230686.72
00:31:00.752 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme2n1 ended in about 0.97 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme2n1 : 0.97 197.05 12.32 65.68 0.00 236043.52 16602.45 267386.88
00:31:00.752 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme3n1 ended in about 0.99 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme3n1 : 0.99 194.38 12.15 64.79 0.00 234359.04 14964.05 253405.87
00:31:00.752 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme4n1 ended in about 0.98 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme4n1 : 0.98 199.18 12.45 65.04 0.00 225135.69 18568.53 248162.99
00:31:00.752 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme5n1 ended in about 0.99 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme5n1 : 0.99 129.27 8.08 64.63 0.00 300489.39 23811.41 253405.87
00:31:00.752 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme6n1 ended in about 0.98 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme6n1 : 0.98 200.55 12.53 65.48 0.00 213922.58 4915.20 246415.36
00:31:00.752 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme7n1 ended in about 0.99 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.752 Nvme7n1 : 0.99 194.88 12.18 64.96 0.00 214368.64 27088.21 251658.24
00:31:00.752 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.752 Job: Nvme8n1 ended in about 0.99 seconds with error
00:31:00.752 Verification LBA range: start 0x0 length 0x400
00:31:00.753 Nvme8n1 : 0.99 193.42 12.09 64.47 0.00 211372.59 22063.79 223696.21
00:31:00.753 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.753 Job: Nvme9n1 ended in about 1.00 seconds with error
00:31:00.753 Verification LBA range: start 0x0 length 0x400
00:31:00.753 Nvme9n1 : 1.00 127.73 7.98 63.86 0.00 278677.05 17803.95 277872.64
00:31:00.753 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:00.753 Job: Nvme10n1 ended in about 1.00 seconds with error
00:31:00.753 Verification LBA range: start 0x0 length 0x400
00:31:00.753 Nvme10n1 : 1.00 128.63 8.04 64.31 0.00 269796.41 18786.99 269134.51
00:31:00.753 ===================================================================================================================
00:31:00.753 Total : 1762.35 110.15 649.00 0.00 239126.28 4915.20 277872.64
00:31:00.753 [2024-09-27 15:49:40.994721] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:00.753 [2024-09-27 15:49:40.994750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:31:00.753 [2024-09-27 15:49:40.995048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:00.753 [2024-09-27 15:49:40.995064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2f190 with addr=10.0.0.2, port=4420
00:31:00.753 [2024-09-27 15:49:40.995073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f190 is same with the state(6) to be set
00:31:00.753 [2024-09-27 15:49:40.995086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b02e0 (9): Bad file descriptor
00:31:00.753 [2024-09-27 15:49:40.995098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5a30 (9): Bad file descriptor
00:31:00.753 [2024-09-27 15:49:40.995107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe30620 (9): Bad file descriptor
00:31:00.753 [2024-09-27 15:49:40.995122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:31:00.753 [2024-09-27 15:49:40.995129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:31:00.753 [2024-09-27 15:49:40.995138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:31:00.753 [2024-09-27 15:49:40.995153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:31:00.753 [2024-09-27 15:49:40.995159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:31:00.753 [2024-09-27 15:49:40.995167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:31:00.753 [2024-09-27 15:49:40.995284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.995294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.995617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.753 [2024-09-27 15:49:40.995630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ba220 with addr=10.0.0.2, port=4420 00:31:00.753 [2024-09-27 15:49:40.995639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba220 is same with the state(6) to be set 00:31:00.753 [2024-09-27 15:49:40.995902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.753 [2024-09-27 15:49:40.995913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ba680 with addr=10.0.0.2, port=4420 00:31:00.753 [2024-09-27 15:49:40.995921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba680 is same with the state(6) to be set 00:31:00.753 [2024-09-27 15:49:40.996268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.753 [2024-09-27 15:49:40.996279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdda6d0 with addr=10.0.0.2, port=4420 00:31:00.753 [2024-09-27 15:49:40.996287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdda6d0 is same with the state(6) to be set 00:31:00.753 [2024-09-27 15:49:40.996636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.753 [2024-09-27 15:49:40.996647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2f390 with addr=10.0.0.2, port=4420 00:31:00.753 [2024-09-27 15:49:40.996654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f390 is same with the state(6) to be set 00:31:00.753 [2024-09-27 15:49:40.996663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f190 (9): Bad file descriptor 00:31:00.753 [2024-09-27 15:49:40.996671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.996678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.996685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:00.753 [2024-09-27 15:49:40.996695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.996702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.996708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:31:00.753 [2024-09-27 15:49:40.996719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.996726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.996733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:00.753 [2024-09-27 15:49:40.996768] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:00.753 [2024-09-27 15:49:40.996781] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:00.753 [2024-09-27 15:49:40.996792] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:00.753 [2024-09-27 15:49:40.996811] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:00.753 [2024-09-27 15:49:40.997130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ba220 (9): Bad file descriptor 00:31:00.753 [2024-09-27 15:49:40.997172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ba680 (9): Bad file descriptor 00:31:00.753 [2024-09-27 15:49:40.997181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdda6d0 (9): Bad file descriptor 00:31:00.753 [2024-09-27 15:49:40.997190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f390 (9): Bad file descriptor 00:31:00.753 [2024-09-27 15:49:40.997198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.997205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.997212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:00.753 [2024-09-27 15:49:40.997480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:00.753 [2024-09-27 15:49:40.997493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:00.753 [2024-09-27 15:49:40.997503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.997532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.997539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:31:00.753 [2024-09-27 15:49:40.997550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.997557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.997565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:00.753 [2024-09-27 15:49:40.997575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.997582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.997590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:00.753 [2024-09-27 15:49:40.997600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:00.753 [2024-09-27 15:49:40.997607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:00.753 [2024-09-27 15:49:40.997614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:00.753 [2024-09-27 15:49:40.997649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.753 [2024-09-27 15:49:40.997904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.753 [2024-09-27 15:49:40.997918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c7610 with addr=10.0.0.2, port=4420 00:31:00.754 [2024-09-27 15:49:40.997927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7610 is same with the state(6) to be set 00:31:00.754 [2024-09-27 15:49:40.998227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.754 [2024-09-27 15:49:40.998238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde6190 with addr=10.0.0.2, port=4420 00:31:00.754 [2024-09-27 15:49:40.998247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde6190 is same with the state(6) to be set 00:31:00.754 [2024-09-27 15:49:40.998276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7610 (9): Bad file descriptor 00:31:00.754 [2024-09-27 15:49:40.998287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde6190 (9): Bad file descriptor 00:31:00.754 [2024-09-27 15:49:40.998320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:00.754 [2024-09-27 15:49:40.998329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:00.754 [2024-09-27 15:49:40.998337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:31:00.754 [2024-09-27 15:49:40.998346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:00.754 [2024-09-27 15:49:40.998354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:00.754 [2024-09-27 15:49:40.998361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:00.754 [2024-09-27 15:49:40.998389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.754 [2024-09-27 15:49:40.998396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:00.754 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:31:01.695 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 511309 00:31:01.695 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:31:01.695 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 511309 00:31:01.695 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 511309 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:01.957 15:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:01.957 rmmod nvme_tcp
00:31:01.957 rmmod nvme_fabrics
00:31:01.957 rmmod nvme_keyring
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 511245 ']'
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 511245
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 511245 ']'
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 511245
00:31:01.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (511245) - No such process
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 511245 is not found'
00:31:01.957 Process with pid 511245 is not found
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:01.957 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:04.505
00:31:04.505 real 0m7.352s
00:31:04.505 user 0m17.413s
00:31:04.505 sys 0m1.228s
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:04.505 ************************************
00:31:04.505 END TEST nvmf_shutdown_tc3
00:31:04.505 ************************************
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:04.505 ************************************
00:31:04.505 START TEST nvmf_shutdown_tc4
00:31:04.505 ************************************
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@10 -- # set +x 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:04.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:04.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:04.505 Found net devices under 0000:31:00.0: cvl_0_0 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:04.505 Found net devices under 0000:31:00.1: cvl_0_1 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.505 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:31:04.506 00:31:04.506 --- 10.0.0.2 ping statistics --- 00:31:04.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.506 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:31:04.506 00:31:04.506 --- 10.0.0.1 ping statistics --- 00:31:04.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.506 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=512769 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 512769 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 512769 ']' 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
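What nvmf_tcp_init plus nvmfappstart did above, boiled down: move the target-side port (cvl_0_0) into its own network namespace, address both ends of the link, open TCP/4420 through iptables, prove reachability with one ping in each direction, and only then launch nvmf_tgt inside the namespace. A condensed sketch of the same sequence (addresses, names, and flags verbatim from this trace; the quadrupled `ip netns exec` prefix on the nvmf_tgt line is just NVMF_APP being re-prefixed and is harmless, since re-entering the same namespace is a no-op):

    # Target side lives in the namespace; initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # (the ipts wrapper also tags the rule with an SPDK_NVMF comment, presumably for cleanup)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target: OK above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator: OK above

    # Then start the target inside the namespace and wait for its RPC socket:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!   # 512769 in this run; waitforlisten polls /var/tmp/spdk.sock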
00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.506 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:04.506 [2024-09-27 15:49:44.912466] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:31:04.506 [2024-09-27 15:49:44.912537] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.766 [2024-09-27 15:49:45.001724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.766 [2024-09-27 15:49:45.035078] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.766 [2024-09-27 15:49:45.035114] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.766 [2024-09-27 15:49:45.035120] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.766 [2024-09-27 15:49:45.035125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.766 [2024-09-27 15:49:45.035130] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.766 [2024-09-27 15:49:45.035270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.766 [2024-09-27 15:49:45.035423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.766 [2024-09-27 15:49:45.035575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.766 [2024-09-27 15:49:45.035577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:05.336 [2024-09-27 15:49:45.751195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:05.336 15:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.336 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:05.596 Malloc1 
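With four reactors up (cores 1-4, matching the 0x1E mask), the test creates the TCP transport over JSON-RPC and then stages one config batch per subsystem into rpcs.txt. The `cat` payload is not echoed in this trace, but the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener that appear next suggest each iteration contributes roughly the following; the RPC names are real SPDK commands, while the malloc sizes and NQN pattern are inferred, not shown in the log:

    # Transport first (exactly as rpc_cmd issues it above):
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192

    # Then one batch per subsystem, later replayed from rpcs.txt -- a guess at
    # the shape, since the heredoc body is not visible in this trace:
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done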
00:31:05.596 [2024-09-27 15:49:45.849846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.596 Malloc2 00:31:05.596 Malloc3 00:31:05.596 Malloc4 00:31:05.596 Malloc5 00:31:05.596 Malloc6 00:31:05.596 Malloc7 00:31:05.856 Malloc8 00:31:05.856 Malloc9 00:31:05.856 Malloc10 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=513121 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:31:05.856 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:31:05.856 [2024-09-27 15:49:46.320591] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 512769 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 512769 ']' 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 512769 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 512769 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 512769' 00:31:11.146 killing process with pid 512769 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 512769 00:31:11.146 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 512769 00:31:11.146 [2024-09-27 15:49:51.325689] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c060 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.325732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c060 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.325739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c060 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.325814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c530 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.325844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c530 is same with the state(6) to be set 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 [2024-09-27 15:49:51.326705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 [2024-09-27 15:49:51.326728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.326734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 [2024-09-27 15:49:51.326739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.326745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 [2024-09-27 15:49:51.326750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 [2024-09-27 15:49:51.326755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bb90 is same with the state(6) to be set 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 [2024-09-27 15:49:51.326867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.146 starting I/O failed: -6 
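The failure injection itself happened at the end of the previous block: spdk_nvme_perf is started against 10.0.0.2:4420 with 20 seconds of random writes queued, and after only 5 seconds the target process (pid 512769) is killed out from under it. Everything from here on is the intended fallout, not a test malfunction. Reduced to its skeleton (paths and flags verbatim from this trace):

    # tc4 in a nutshell: start a 20 s randwrite load, then yank the target at 5 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!    # 513121 in this run
    sleep 5
    kill 512769   # nvmf_tgt dies; ~15 s of outstanding I/O now has no target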
00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.146 Write completed with error (sct=0, sc=8) 00:31:11.146 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O 
failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 [2024-09-27 15:49:51.328696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 
starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 [2024-09-27 15:49:51.329410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ad20 is same with starting I/O failed: -6 00:31:11.147 the state(6) to be set 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 [2024-09-27 15:49:51.329428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ad20 is same with the state(6) to be set 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 
Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 starting I/O failed: -6 00:31:11.147 [2024-09-27 15:49:51.329606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.147 [2024-09-27 15:49:51.329621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.147 starting I/O failed: -6 00:31:11.147 [2024-09-27 15:49:51.329626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.147 Write completed with error (sct=0, sc=8) 00:31:11.148 [2024-09-27 15:49:51.329632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.148 starting I/O failed: -6 00:31:11.148 [2024-09-27 15:49:51.329642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 [2024-09-27 15:49:51.329647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b1f0 is same with starting I/O failed: -6 00:31:11.148 the state(6) to be set 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 [2024-09-27 15:49:51.329871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 
15:49:51.329913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.329923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121b6c0 is same with the state(6) to be set 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 [2024-09-27 15:49:51.330474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.148 NVMe io qpair process completion error 00:31:11.148 [2024-09-27 15:49:51.331536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d0a0 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d420 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d420 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d420 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.331901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d420 is 
same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7490 is same with the state(6) to be set 00:31:11.148 [2024-09-27 15:49:51.332326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111cd20 is same with the state(6) to be set 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 
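The repeated "recv state of tqpair=0x... is same with the state(6) to be set" lines come from the target-side TCP qpair state machine (nvmf_tcp_qpair_set_recv_state in tcp.c) logging redundant transitions while connections are torn down; state 6 appears to be one of the terminal entries of the PDU receive-state enum, so the message is noisy but benign here. Each distinct tqpair pointer is one connection. A quick way to count them in a saved capture (the log filename is hypothetical):

    # How many distinct target-side connections hit the redundant-transition message?
    grep -o 'tqpair=0x[0-9a-f]*' nvmf_shutdown_tc4.log | sort -u | wc -l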
00:31:11.148 Write completed with error (sct=0, sc=8) 00:31:11.148 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 [2024-09-27 15:49:51.333877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 starting I/O failed: -6 00:31:11.149 NVMe io qpair process completion error 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 
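The per-command lines decode cleanly against the NVMe base spec's generic status table: sct=0 is the generic status code type, and sc=8 there is "Command Aborted due to SQ Deletion", which is exactly what outstanding writes should report when the target's queues vanish mid-stream. The accompanying -6 is -ENXIO, matching the "No such device or address" text the initiator prints once the transport gives up on a qpair. Tallying a saved capture (filename hypothetical):

    # Count aborted writes vs. submission failures in the run:
    grep -c 'Write completed with error (sct=0, sc=8)' nvmf_shutdown_tc4.log
    grep -c 'starting I/O failed: -6' nvmf_shutdown_tc4.log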
00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 [2024-09-27 15:49:51.335695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with 
error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.149 starting I/O failed: -6 00:31:11.149 [2024-09-27 15:49:51.336638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.149 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 
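The pattern now repeats once per I/O qpair: a burst of aborted writes, then "CQ transport error -6 ... on qpair id N" when spdk_nvme_qpair_process_completions notices the socket is gone, then "NVMe io qpair process completion error" as that initiator-side qpair is declared dead. Qpair id 0 never appears because that is the admin queue. To see the per-qpair ordering at a glance in a capture (filename hypothetical):

    # Show just the qpair-level transitions, in log order:
    grep -n -e 'CQ transport error' -e 'io qpair process completion error' \
        nvmf_shutdown_tc4.log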
00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 [2024-09-27 15:49:51.337555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6 00:31:11.150 Write 
completed with error (sct=0, sc=8) 00:31:11.150 starting I/O failed: -6
00:31:11.150 [... the message pair "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeats for every outstanding write on each qpair; distinct events follow ...]
00:31:11.151 [2024-09-27 15:49:51.338796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:11.151 NVMe io qpair process completion error
00:31:11.151 [2024-09-27 15:49:51.339879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:11.151 [2024-09-27 15:49:51.340674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:11.152 [2024-09-27 15:49:51.341595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.153 [2024-09-27 15:49:51.344372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:11.153 NVMe io qpair process completion error
00:31:11.153 [2024-09-27 15:49:51.345442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:11.153 [2024-09-27 15:49:51.346342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:11.154 [2024-09-27 15:49:51.347260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.155 [2024-09-27 15:49:51.348838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:11.155 NVMe io qpair process completion error
00:31:11.155 [2024-09-27 15:49:51.349881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:11.155 [2024-09-27 15:49:51.350693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.156 [2024-09-27 15:49:51.351617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:11.157 [2024-09-27 15:49:51.353257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:11.157 NVMe io qpair process completion error
00:31:11.157 [2024-09-27 15:49:51.354311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:11.157 [2024-09-27 15:49:51.355128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.158 [2024-09-27 15:49:51.356051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:11.159 [2024-09-27 15:49:51.359331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:11.159 NVMe io qpair process completion error
00:31:11.159 [2024-09-27 15:49:51.360618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:11.159 [2024-09-27 15:49:51.361452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 [2024-09-27 15:49:51.362391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write 
completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write 
completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 Write completed with error (sct=0, sc=8) 00:31:11.160 starting I/O failed: -6 00:31:11.160 [2024-09-27 15:49:51.363848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.160 NVMe io qpair process completion error 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write 
completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 [2024-09-27 15:49:51.365072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 
[2024-09-27 15:49:51.365980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.161 starting I/O failed: -6 00:31:11.161 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with 
error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 [2024-09-27 15:49:51.366859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with 
error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 [2024-09-27 15:49:51.369407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.162 NVMe io qpair process completion error 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.162 Write completed with error (sct=0, sc=8) 
00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 Write completed with error (sct=0, sc=8) 00:31:11.162 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 [2024-09-27 15:49:51.370648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 
00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 [2024-09-27 15:49:51.371456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write 
completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.163 starting I/O failed: -6 00:31:11.163 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 [2024-09-27 15:49:51.372395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting 
I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O 
failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 starting I/O failed: -6 00:31:11.164 [2024-09-27 15:49:51.374305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.164 NVMe io qpair process completion error 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.164 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with error (sct=0, sc=8) 00:31:11.165 Write completed with 
00:31:11.165 Initializing NVMe Controllers
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:31:11.165 Controller IO queue size 128, less than required.
00:31:11.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[Condensed: the same two queue-size advisory lines follow each of the attach lines below.]
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:31:11.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:31:11.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:31:11.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:31:11.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:31:11.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:31:11.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:31:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:31:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:31:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:31:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:31:11.166 Initialization complete. Launching workers.
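[Context note, not part of the captured output: the repeated advisory above fires when the workload's queue depth exceeds the I/O queue size the controller actually granted (128 here), so surplus requests wait inside the NVMe driver. Besides lowering the queue depth or I/O size as the message suggests, an initiator can ask for a larger queue at attach time. A hedged sketch using SPDK's real probe-callback hook; the value 256 and the callback body are illustrative assumptions, and the transport may still cap the size at what the controller supports.]

#include <stdbool.h>

#include "spdk/nvme.h"

/* Called once per discovered controller, before attach; adjusting
 * opts here changes how that controller's I/O queues are created. */
static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{
	opts->io_queue_size = 256;	/* illustrative; the log shows 128 */
	return true;			/* proceed to attach this controller */
}

[The callback is wired up through spdk_nvme_probe(); if the granted size still ends up below the benchmark's queue depth, the driver queues the overflow, which is exactly the behavior the advisory describes.]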
00:31:11.166 ========================================================
00:31:11.166                                                                             Latency(us)
00:31:11.166 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1870.78      80.39   68867.80     683.34  119304.02
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1881.07      80.83   68080.26     723.69  121562.36
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1904.42      81.83   67260.76     828.91  120511.09
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1933.56      83.08   66283.05     932.24  117590.12
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1873.78      80.51   68415.53     812.79  118342.13
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1890.92      81.25   67813.89     817.15  125866.68
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1907.42      81.96   67274.37     690.31  129270.18
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1896.06      81.47   67697.78     908.61  130872.71
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1923.06      82.63   66783.20     808.73  133503.16
00:31:11.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1881.71      80.85   67573.43     500.58  119106.99
00:31:11.166 ========================================================
00:31:11.166 Total                                                                    :   18962.79     814.81   67597.83     500.58  133503.16
00:31:11.166
00:31:11.166 [2024-09-27 15:49:51.378919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fb020 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.378965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f94f0 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.378995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f91c0 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8d00 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fac40 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9b50 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fb350 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fb680 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f9820 is same with the state(6) to be set
00:31:11.166 [2024-09-27 15:49:51.379202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8b20 is same with the state(6) to be set
00:31:11.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:11.166 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
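[Context note, not part of the captured output: in the latency table above, the Total row sums IOPS and MiB/s across the ten devices and takes min/max over them, and its Average column (67597.83 us) is consistent with an IOPS-weighted mean of the per-device averages. A small self-contained check; the arrays hold only the first two rows, and extending them with the remaining eight should reproduce the Total row.]

#include <stdio.h>

int main(void)
{
	/* cnode7 and cnode4 rows from the table: IOPS and average latency (us). */
	const double iops[]   = { 1870.78, 1881.07 };
	const double avg_us[] = { 68867.80, 68080.26 };
	double total_iops = 0.0, weighted_sum = 0.0;

	for (int i = 0; i < 2; i++) {
		total_iops   += iops[i];
		weighted_sum += iops[i] * avg_us[i];
	}
	/* IOPS-weighted mean latency across the listed devices. */
	printf("weighted avg latency: %.2f us over %.2f IOPS\n",
	       weighted_sum / total_iops, total_iops);
	return 0;
}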
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 513121 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 513121 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 513121 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.109 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.109 rmmod nvme_tcp 00:31:12.370 rmmod nvme_fabrics 00:31:12.370 rmmod nvme_keyring 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 512769 ']' 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 512769 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 512769 ']' 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 512769 00:31:12.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (512769) - No such process 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 512769 is not found' 00:31:12.370 Process with pid 512769 is not found 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.370 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.285 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.285 00:31:14.285 real 0m10.268s 00:31:14.285 user 0m27.903s 00:31:14.285 sys 0m3.976s 00:31:14.285 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.285 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:14.285 ************************************ 00:31:14.285 END TEST nvmf_shutdown_tc4 00:31:14.285 ************************************ 00:31:14.285 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:31:14.546 00:31:14.546 real 0m43.485s 00:31:14.546 user 1m45.139s 00:31:14.546 sys 0m14.017s 00:31:14.546 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.546 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:31:14.546 ************************************ 00:31:14.546 END TEST nvmf_shutdown 00:31:14.546 ************************************ 00:31:14.546 15:49:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:14.546 00:31:14.546 real 19m47.391s 00:31:14.546 user 51m53.847s 00:31:14.546 sys 4m49.214s 00:31:14.546 15:49:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.546 15:49:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:14.546 ************************************ 00:31:14.546 END TEST nvmf_target_extra 00:31:14.546 ************************************ 00:31:14.546 15:49:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:14.546 15:49:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:14.546 15:49:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.546 15:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:14.546 ************************************ 00:31:14.546 START TEST nvmf_host 00:31:14.546 ************************************ 00:31:14.546 15:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:14.546 * Looking for test storage... 00:31:14.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:14.546 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:14.546 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:14.546 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.808 --rc genhtml_branch_coverage=1 00:31:14.808 --rc genhtml_function_coverage=1 00:31:14.808 --rc genhtml_legend=1 00:31:14.808 --rc geninfo_all_blocks=1 00:31:14.808 --rc geninfo_unexecuted_blocks=1 00:31:14.808 00:31:14.808 ' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.808 --rc genhtml_branch_coverage=1 00:31:14.808 --rc genhtml_function_coverage=1 00:31:14.808 --rc genhtml_legend=1 00:31:14.808 --rc geninfo_all_blocks=1 00:31:14.808 --rc geninfo_unexecuted_blocks=1 00:31:14.808 00:31:14.808 ' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.808 --rc genhtml_branch_coverage=1 00:31:14.808 --rc genhtml_function_coverage=1 00:31:14.808 --rc genhtml_legend=1 00:31:14.808 --rc geninfo_all_blocks=1 00:31:14.808 --rc geninfo_unexecuted_blocks=1 00:31:14.808 00:31:14.808 ' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.808 --rc genhtml_branch_coverage=1 00:31:14.808 --rc genhtml_function_coverage=1 00:31:14.808 --rc genhtml_legend=1 00:31:14.808 --rc geninfo_all_blocks=1 00:31:14.808 --rc geninfo_unexecuted_blocks=1 00:31:14.808 00:31:14.808 ' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
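The trace entries above step through the dotted-version comparison from scripts/common.sh (lt 1.15 2 via cmp_versions), used here to decide which lcov options apply. A minimal standalone sketch of that field-by-field compare, assuming bash and using illustrative names rather than SPDK's exact helpers:

    # version_lt A B: return 0 (true) when version A sorts before version B.
    version_lt() {
        local IFS=.-:                         # split fields on '.', '-', ':' like cmp_versions
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < n; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && return 1           # first differing field decides
            (( a < b )) && return 0
        done
        return 1                              # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, keep the legacy --rc lcov_* flags"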
00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:14.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.808 15:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.808 ************************************ 00:31:14.808 START TEST nvmf_multicontroller 00:31:14.808 ************************************ 00:31:14.809 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:14.809 * Looking for test storage... 
00:31:14.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:14.809 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:14.809 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:31:14.809 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:31:15.070 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.071 --rc genhtml_branch_coverage=1 00:31:15.071 --rc genhtml_function_coverage=1 00:31:15.071 --rc genhtml_legend=1 00:31:15.071 --rc geninfo_all_blocks=1 00:31:15.071 --rc geninfo_unexecuted_blocks=1 00:31:15.071 00:31:15.071 ' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.071 --rc genhtml_branch_coverage=1 00:31:15.071 --rc genhtml_function_coverage=1 00:31:15.071 --rc genhtml_legend=1 00:31:15.071 --rc geninfo_all_blocks=1 00:31:15.071 --rc geninfo_unexecuted_blocks=1 00:31:15.071 00:31:15.071 ' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.071 --rc genhtml_branch_coverage=1 00:31:15.071 --rc genhtml_function_coverage=1 00:31:15.071 --rc genhtml_legend=1 00:31:15.071 --rc geninfo_all_blocks=1 00:31:15.071 --rc geninfo_unexecuted_blocks=1 00:31:15.071 00:31:15.071 ' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:15.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.071 --rc genhtml_branch_coverage=1 00:31:15.071 --rc genhtml_function_coverage=1 00:31:15.071 --rc genhtml_legend=1 00:31:15.071 --rc geninfo_all_blocks=1 00:31:15.071 --rc geninfo_unexecuted_blocks=1 00:31:15.071 00:31:15.071 ' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:15.071 15:49:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:15.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.071 15:49:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.071 15:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.212 
15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.212 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.212 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.212 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.212 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.213 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.213 15:50:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.213 15:50:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:31:23.213 00:31:23.213 --- 10.0.0.2 ping statistics --- 00:31:23.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.213 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:31:23.213 00:31:23.213 --- 10.0.0.1 ping statistics --- 00:31:23.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.213 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=518618 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 518618 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 518618 ']' 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.213 15:50:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.213 [2024-09-27 15:50:03.216182] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
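The entries above show nvmfappstart bringing up the target for this test: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with reactor mask 0xE and all tracepoints enabled, and the harness waits on the RPC socket before the rpc_cmd call that follows creates the TCP transport. A rough equivalent of that bring-up, assuming this job's paths and substituting rpc.py for the rpc_cmd wrapper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS="ip netns exec cvl_0_0_ns_spdk"

    # Start the target app in the test namespace (cores 1-3, tracepoint mask 0xFFFF).
    $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # waitforlisten equivalent: block until the app answers on /var/tmp/spdk.sock.
    $SPDK/scripts/rpc.py -t 30 rpc_get_methods > /dev/null

    # Same transport options the rpc_cmd entry below passes: TCP, 8 KiB IO unit size.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192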
00:31:23.213 [2024-09-27 15:50:03.216244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.213 [2024-09-27 15:50:03.308527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:23.213 [2024-09-27 15:50:03.355972] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.213 [2024-09-27 15:50:03.356030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.213 [2024-09-27 15:50:03.356039] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.213 [2024-09-27 15:50:03.356046] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.213 [2024-09-27 15:50:03.356052] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.213 [2024-09-27 15:50:03.356209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.213 [2024-09-27 15:50:03.356366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.213 [2024-09-27 15:50:03.356366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.785 [2024-09-27 15:50:04.104452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.785 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 Malloc0 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 [2024-09-27 15:50:04.177536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 [2024-09-27 15:50:04.189411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 Malloc1 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=518850 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 518850 /var/tmp/bdevperf.sock 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 518850 ']' 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
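With cnode1 and cnode2 each backed by a Malloc bdev and listening on 10.0.0.2 ports 4420 and 4421, bdevperf is now up on its own RPC socket (-z -r /var/tmp/bdevperf.sock). The NOT-wrapped rpc_cmd calls that follow assert bdev_nvme's duplicate-name rule: once NVMe0 is attached, re-attaching under the same -b name, whether to cnode1 with a different hostnqn or to cnode2, must fail with JSON-RPC error -114. A sketch of that negative check, assuming the flags shown in the log and rpc.py in place of rpc_cmd:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # First attach succeeds and surfaces the namespace as bdev NVMe0n1.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Reusing the controller name NVMe0 for a different subnqn must be rejected.
    if $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
        echo "unexpected: duplicate controller name accepted" >&2
        exit 1
    fi
    # Expected: code -114, "A controller named NVMe0 already exists with the
    # specified network path", matching the request/response dumps below.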
00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.786 15:50:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.731 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.731 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:24.731 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:24.731 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.731 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.993 NVMe0n1 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.993 1 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.993 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.993 request: 00:31:24.993 { 00:31:24.993 "name": "NVMe0", 00:31:24.993 "trtype": "tcp", 00:31:24.994 "traddr": "10.0.0.2", 00:31:24.994 "adrfam": "ipv4", 00:31:24.994 "trsvcid": "4420", 00:31:24.994 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:24.994 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:24.994 "hostaddr": "10.0.0.1", 00:31:24.994 "prchk_reftag": false, 00:31:24.994 "prchk_guard": false, 00:31:24.994 "hdgst": false, 00:31:24.994 "ddgst": false, 00:31:24.994 "allow_unrecognized_csi": false, 00:31:24.994 "method": "bdev_nvme_attach_controller", 00:31:24.994 "req_id": 1 00:31:24.994 } 00:31:24.994 Got JSON-RPC error response 00:31:24.994 response: 00:31:24.994 { 00:31:24.994 "code": -114, 00:31:24.994 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:24.994 } 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.994 request: 00:31:24.994 { 00:31:24.994 "name": "NVMe0", 00:31:24.994 "trtype": "tcp", 00:31:24.994 "traddr": "10.0.0.2", 00:31:24.994 "adrfam": "ipv4", 00:31:24.994 "trsvcid": "4420", 00:31:24.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:24.994 "hostaddr": "10.0.0.1", 00:31:24.994 "prchk_reftag": false, 00:31:24.994 "prchk_guard": false, 00:31:24.994 "hdgst": false, 00:31:24.994 "ddgst": false, 00:31:24.994 "allow_unrecognized_csi": false, 00:31:24.994 "method": "bdev_nvme_attach_controller", 00:31:24.994 "req_id": 1 00:31:24.994 } 00:31:24.994 Got JSON-RPC error response 00:31:24.994 response: 00:31:24.994 { 00:31:24.994 "code": -114, 00:31:24.994 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:24.994 } 00:31:24.994 15:50:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.994 request: 00:31:24.994 { 00:31:24.994 "name": "NVMe0", 00:31:24.994 "trtype": "tcp", 00:31:24.994 "traddr": "10.0.0.2", 00:31:24.994 "adrfam": "ipv4", 00:31:24.994 "trsvcid": "4420", 00:31:24.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.994 "hostaddr": "10.0.0.1", 00:31:24.994 "prchk_reftag": false, 00:31:24.994 "prchk_guard": false, 00:31:24.994 "hdgst": false, 00:31:24.994 "ddgst": false, 00:31:24.994 "multipath": "disable", 00:31:24.994 "allow_unrecognized_csi": false, 00:31:24.994 "method": "bdev_nvme_attach_controller", 00:31:24.994 "req_id": 1 00:31:24.994 } 00:31:24.994 Got JSON-RPC error response 00:31:24.994 response: 00:31:24.994 { 00:31:24.994 "code": -114, 00:31:24.994 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:24.994 } 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.994 15:50:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:24.994 request: 00:31:24.994 { 00:31:24.994 "name": "NVMe0", 00:31:24.994 "trtype": "tcp", 00:31:24.994 "traddr": "10.0.0.2", 00:31:24.994 "adrfam": "ipv4", 00:31:24.994 "trsvcid": "4420", 00:31:24.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.994 "hostaddr": "10.0.0.1", 00:31:24.994 "prchk_reftag": false, 00:31:24.994 "prchk_guard": false, 00:31:24.994 "hdgst": false, 00:31:24.994 "ddgst": false, 00:31:24.994 "multipath": "failover", 00:31:24.994 "allow_unrecognized_csi": false, 00:31:24.994 "method": "bdev_nvme_attach_controller", 00:31:24.994 "req_id": 1 00:31:24.994 } 00:31:24.994 Got JSON-RPC error response 00:31:24.994 response: 00:31:24.994 { 00:31:24.994 "code": -114, 00:31:24.994 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:24.994 } 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.994 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:25.255 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
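Each of the NOT-wrapped attach attempts above fails with JSON-RPC error -114 because a controller named NVMe0 already claims that subsystem and network path (or, in the -x disable case, because multipath is off); only the attach to the second listener port 4421 succeeds, adding a new path to the existing controller. A hedged sketch of that pair of checks issued through SPDK's scripts/rpc.py, which rpc_cmd wraps in this trace (the script path is an assumption based on the workspace layout):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'

    # re-attaching NVMe0 on the path it already owns must fail with -114
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
        || echo 'expected failure: NVMe0 already exists on this path'

    # the second listener port is a new path, so the same name is accepted
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1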
00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.255 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:25.516 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:25.516 15:50:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:26.457 { 00:31:26.457 "results": [ 00:31:26.457 { 00:31:26.457 "job": "NVMe0n1", 00:31:26.457 "core_mask": "0x1", 00:31:26.457 "workload": "write", 00:31:26.457 "status": "finished", 00:31:26.457 "queue_depth": 128, 00:31:26.457 "io_size": 4096, 00:31:26.457 "runtime": 1.006184, 00:31:26.457 "iops": 28860.526504098652, 00:31:26.457 "mibps": 112.73643165663536, 00:31:26.457 "io_failed": 0, 00:31:26.457 "io_timeout": 0, 00:31:26.457 "avg_latency_us": 4423.88217408772, 00:31:26.457 "min_latency_us": 2102.6133333333332, 00:31:26.457 "max_latency_us": 10868.053333333333 00:31:26.457 } 00:31:26.457 ], 00:31:26.457 "core_count": 1 00:31:26.457 } 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 518850 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 518850 ']' 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 518850 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:26.718 15:50:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 518850 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 518850' 00:31:26.718 killing process with pid 518850 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 518850 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 518850 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:26.718 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:26.718 [2024-09-27 15:50:04.320669] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:31:26.718 [2024-09-27 15:50:04.320740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518850 ] 00:31:26.718 [2024-09-27 15:50:04.403775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.718 [2024-09-27 15:50:04.450979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.718 [2024-09-27 15:50:05.809128] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name ecc09d53-5336-4883-83b7-63ea726ad897 already exists 00:31:26.718 [2024-09-27 15:50:05.809157] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:ecc09d53-5336-4883-83b7-63ea726ad897 alias for bdev NVMe1n1 00:31:26.718 [2024-09-27 15:50:05.809166] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:26.718 Running I/O for 1 seconds... 00:31:26.718 28846.00 IOPS, 112.68 MiB/s 00:31:26.718 Latency(us) 00:31:26.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.718 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:26.718 NVMe0n1 : 1.01 28860.53 112.74 0.00 0.00 4423.88 2102.61 10868.05 00:31:26.718 =================================================================================================================== 00:31:26.718 Total : 28860.53 112.74 0.00 0.00 4423.88 2102.61 10868.05 00:31:26.718 Received shutdown signal, test time was about 1.000000 seconds 00:31:26.718 00:31:26.718 Latency(us) 00:31:26.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.718 =================================================================================================================== 00:31:26.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:26.718 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:26.718 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.979 rmmod nvme_tcp 00:31:26.979 rmmod nvme_fabrics 00:31:26.979 rmmod nvme_keyring 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 518618 ']' 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@514 -- # killprocess 518618 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 518618 ']' 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 518618 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 518618 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 518618' 00:31:26.979 killing process with pid 518618 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 518618 00:31:26.979 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 518618 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.240 15:50:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.193 00:31:29.193 real 0m14.387s 00:31:29.193 user 0m17.864s 00:31:29.193 sys 0m6.562s 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:29.193 ************************************ 00:31:29.193 END TEST nvmf_multicontroller 00:31:29.193 ************************************ 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.193 ************************************ 00:31:29.193 START TEST nvmf_aer 00:31:29.193 ************************************ 00:31:29.193 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:29.455 * Looking for test storage... 00:31:29.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.455 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:29.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.456 --rc genhtml_branch_coverage=1 00:31:29.456 --rc genhtml_function_coverage=1 00:31:29.456 --rc genhtml_legend=1 00:31:29.456 --rc geninfo_all_blocks=1 00:31:29.456 --rc geninfo_unexecuted_blocks=1 00:31:29.456 00:31:29.456 ' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:29.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.456 15:50:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:37.604 15:50:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:37.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:37.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:37.604 Found net devices under 0000:31:00.0: cvl_0_0 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:37.604 Found net devices under 0000:31:00.1: cvl_0_1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.604 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:37.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:37.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:31:37.605 00:31:37.605 --- 10.0.0.2 ping statistics --- 00:31:37.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.605 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:31:37.605 00:31:37.605 --- 10.0.0.1 ping statistics --- 00:31:37.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.605 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=523727 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 523727 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 523727 ']' 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:37.605 15:50:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:37.605 [2024-09-27 15:50:17.625865] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:31:37.605 [2024-09-27 15:50:17.625956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.605 [2024-09-27 15:50:17.714973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.605 [2024-09-27 15:50:17.762383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.605 [2024-09-27 15:50:17.762437] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.605 [2024-09-27 15:50:17.762446] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.605 [2024-09-27 15:50:17.762453] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.605 [2024-09-27 15:50:17.762459] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.605 [2024-09-27 15:50:17.762644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.605 [2024-09-27 15:50:17.762800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.605 [2024-09-27 15:50:17.762933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.605 [2024-09-27 15:50:17.762933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 [2024-09-27 15:50:18.496694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 Malloc0 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 [2024-09-27 15:50:18.562412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.178 [ 00:31:38.178 { 00:31:38.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:38.178 "subtype": "Discovery", 00:31:38.178 "listen_addresses": [], 00:31:38.178 "allow_any_host": true, 00:31:38.178 "hosts": [] 00:31:38.178 }, 00:31:38.178 { 00:31:38.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.178 "subtype": "NVMe", 00:31:38.178 "listen_addresses": [ 00:31:38.178 { 00:31:38.178 "trtype": "TCP", 00:31:38.178 "adrfam": "IPv4", 00:31:38.178 "traddr": "10.0.0.2", 00:31:38.178 "trsvcid": "4420" 00:31:38.178 } 00:31:38.178 ], 00:31:38.178 "allow_any_host": true, 00:31:38.178 "hosts": [], 00:31:38.178 "serial_number": "SPDK00000000000001", 00:31:38.178 "model_number": "SPDK bdev Controller", 00:31:38.178 "max_namespaces": 2, 00:31:38.178 "min_cntlid": 1, 00:31:38.178 "max_cntlid": 65519, 00:31:38.178 "namespaces": [ 00:31:38.178 { 00:31:38.178 "nsid": 1, 00:31:38.178 "bdev_name": "Malloc0", 00:31:38.178 "name": "Malloc0", 00:31:38.178 "nguid": "FA9098FB6E834C799C0B069DF45371E0", 00:31:38.178 "uuid": "fa9098fb-6e83-4c79-9c0b-069df45371e0" 00:31:38.178 } 00:31:38.178 ] 00:31:38.178 } 00:31:38.178 ] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:38.178 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=523827 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:38.179 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.440 Malloc1 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.440 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.440 Asynchronous Event Request test 00:31:38.440 Attaching to 10.0.0.2 00:31:38.440 Attached to 10.0.0.2 00:31:38.440 Registering asynchronous event callbacks... 00:31:38.440 Starting namespace attribute notice tests for all controllers... 00:31:38.440 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:38.440 aer_cb - Changed Namespace 00:31:38.440 Cleaning up... 
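The trace above is the heart of the aer.sh test: the target exports nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, the aer test binary connects and arms an Asynchronous Event Request, and adding Malloc1 as a second namespace fires the Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) before the binary touches /tmp/aer_touch_file and cleans up. A minimal sketch of the same RPC sequence against an already-running nvmf_tgt, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket:

# Sketch only: mirrors the rpc_cmd calls in the trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192            # transport flags as used by the test
rpc.py bdev_malloc_create 64 512 --name Malloc0           # 64 MB ramdisk, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With a host connected and an AER outstanding, a second namespace
# raises the Changed Namespace event seen in the log:
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2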
00:31:38.440 [ 00:31:38.440 { 00:31:38.440 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:38.440 "subtype": "Discovery", 00:31:38.440 "listen_addresses": [], 00:31:38.440 "allow_any_host": true, 00:31:38.440 "hosts": [] 00:31:38.440 }, 00:31:38.440 { 00:31:38.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.440 "subtype": "NVMe", 00:31:38.440 "listen_addresses": [ 00:31:38.440 { 00:31:38.440 "trtype": "TCP", 00:31:38.440 "adrfam": "IPv4", 00:31:38.440 "traddr": "10.0.0.2", 00:31:38.440 "trsvcid": "4420" 00:31:38.440 } 00:31:38.440 ], 00:31:38.440 "allow_any_host": true, 00:31:38.440 "hosts": [], 00:31:38.440 "serial_number": "SPDK00000000000001", 00:31:38.440 "model_number": "SPDK bdev Controller", 00:31:38.440 "max_namespaces": 2, 00:31:38.440 "min_cntlid": 1, 00:31:38.440 "max_cntlid": 65519, 00:31:38.440 "namespaces": [ 00:31:38.441 { 00:31:38.441 "nsid": 1, 00:31:38.441 "bdev_name": "Malloc0", 00:31:38.441 "name": "Malloc0", 00:31:38.441 "nguid": "FA9098FB6E834C799C0B069DF45371E0", 00:31:38.441 "uuid": "fa9098fb-6e83-4c79-9c0b-069df45371e0" 00:31:38.441 }, 00:31:38.441 { 00:31:38.441 "nsid": 2, 00:31:38.441 "bdev_name": "Malloc1", 00:31:38.441 "name": "Malloc1", 00:31:38.441 "nguid": "C69E371220AE45A49455ADDDEAEFD4FA", 00:31:38.441 "uuid": "c69e3712-20ae-45a4-9455-adddeaefd4fa" 00:31:38.441 } 00:31:38.441 ] 00:31:38.441 } 00:31:38.441 ] 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 523827 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.441 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.702 15:50:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.702 rmmod 
nvme_tcp 00:31:38.702 rmmod nvme_fabrics 00:31:38.702 rmmod nvme_keyring 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 523727 ']' 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 523727 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 523727 ']' 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 523727 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 523727 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 523727' 00:31:38.702 killing process with pid 523727 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 523727 00:31:38.702 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 523727 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.964 15:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.880 15:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.880 00:31:40.880 real 0m11.699s 00:31:40.880 user 0m8.263s 00:31:40.880 sys 0m6.248s 00:31:40.880 15:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.880 15:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:40.880 ************************************ 00:31:40.880 END TEST nvmf_aer 00:31:40.880 ************************************ 00:31:41.140 15:50:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:41.140 15:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:41.140 15:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.140 15:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.140 ************************************ 00:31:41.140 START TEST nvmf_async_init 00:31:41.140 ************************************ 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:41.141 * Looking for test storage... 00:31:41.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.141 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.402 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.402 --rc genhtml_branch_coverage=1 00:31:41.402 --rc genhtml_function_coverage=1 00:31:41.402 --rc genhtml_legend=1 00:31:41.402 --rc geninfo_all_blocks=1 00:31:41.402 --rc geninfo_unexecuted_blocks=1 00:31:41.402 00:31:41.403 ' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:41.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.403 --rc genhtml_branch_coverage=1 00:31:41.403 --rc genhtml_function_coverage=1 00:31:41.403 --rc genhtml_legend=1 00:31:41.403 --rc geninfo_all_blocks=1 00:31:41.403 --rc geninfo_unexecuted_blocks=1 00:31:41.403 00:31:41.403 ' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:41.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.403 --rc genhtml_branch_coverage=1 00:31:41.403 --rc genhtml_function_coverage=1 00:31:41.403 --rc genhtml_legend=1 00:31:41.403 --rc geninfo_all_blocks=1 00:31:41.403 --rc geninfo_unexecuted_blocks=1 00:31:41.403 00:31:41.403 ' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:41.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.403 --rc genhtml_branch_coverage=1 00:31:41.403 --rc genhtml_function_coverage=1 00:31:41.403 --rc genhtml_legend=1 00:31:41.403 --rc geninfo_all_blocks=1 00:31:41.403 --rc geninfo_unexecuted_blocks=1 00:31:41.403 00:31:41.403 ' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.403 15:50:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:41.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:41.403 15:50:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2c33f0ff7111407f8a6840f1f3cec0c8 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.403 15:50:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:49.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:49.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:49.545 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:49.564 Found net devices under 0000:31:00.0: cvl_0_0 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:49.564 Found net devices under 0000:31:00.1: cvl_0_1 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.564 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:31:49.565 00:31:49.565 --- 10.0.0.2 ping statistics --- 00:31:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.565 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:31:49.565 00:31:49.565 --- 10.0.0.1 ping statistics --- 00:31:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.565 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=528174 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 528174 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 528174 ']' 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.565 15:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:49.565 [2024-09-27 15:50:29.512404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:31:49.565 [2024-09-27 15:50:29.512477] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.565 [2024-09-27 15:50:29.601188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.565 [2024-09-27 15:50:29.647186] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.565 [2024-09-27 15:50:29.647238] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.565 [2024-09-27 15:50:29.647253] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.565 [2024-09-27 15:50:29.647260] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.565 [2024-09-27 15:50:29.647266] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.565 [2024-09-27 15:50:29.647288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 [2024-09-27 15:50:30.371028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 null0 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2c33f0ff7111407f8a6840f1f3cec0c8 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.138 [2024-09-27 15:50:30.431411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.138 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.399 nvme0n1 00:31:50.399 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.399 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:50.399 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.399 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.399 [ 00:31:50.399 { 00:31:50.399 "name": "nvme0n1", 00:31:50.399 "aliases": [ 00:31:50.399 "2c33f0ff-7111-407f-8a68-40f1f3cec0c8" 00:31:50.399 ], 00:31:50.399 "product_name": "NVMe disk", 00:31:50.399 "block_size": 512, 00:31:50.399 "num_blocks": 2097152, 00:31:50.399 "uuid": "2c33f0ff-7111-407f-8a68-40f1f3cec0c8", 00:31:50.399 "numa_id": 0, 00:31:50.399 "assigned_rate_limits": { 00:31:50.399 "rw_ios_per_sec": 0, 00:31:50.399 "rw_mbytes_per_sec": 0, 00:31:50.399 "r_mbytes_per_sec": 0, 00:31:50.399 "w_mbytes_per_sec": 0 00:31:50.399 }, 00:31:50.399 "claimed": false, 00:31:50.399 "zoned": false, 00:31:50.399 "supported_io_types": { 00:31:50.399 "read": true, 00:31:50.399 "write": true, 00:31:50.399 "unmap": false, 00:31:50.399 "flush": true, 00:31:50.399 "reset": true, 00:31:50.399 "nvme_admin": true, 00:31:50.399 "nvme_io": true, 00:31:50.399 "nvme_io_md": false, 00:31:50.399 "write_zeroes": true, 00:31:50.399 "zcopy": false, 00:31:50.399 "get_zone_info": false, 00:31:50.399 "zone_management": false, 00:31:50.399 "zone_append": false, 00:31:50.399 "compare": true, 00:31:50.400 "compare_and_write": true, 00:31:50.400 "abort": true, 00:31:50.400 "seek_hole": false, 00:31:50.400 "seek_data": false, 00:31:50.400 "copy": true, 00:31:50.400 "nvme_iov_md": false 00:31:50.400 }, 00:31:50.400 
"memory_domains": [ 00:31:50.400 { 00:31:50.400 "dma_device_id": "system", 00:31:50.400 "dma_device_type": 1 00:31:50.400 } 00:31:50.400 ], 00:31:50.400 "driver_specific": { 00:31:50.400 "nvme": [ 00:31:50.400 { 00:31:50.400 "trid": { 00:31:50.400 "trtype": "TCP", 00:31:50.400 "adrfam": "IPv4", 00:31:50.400 "traddr": "10.0.0.2", 00:31:50.400 "trsvcid": "4420", 00:31:50.400 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:50.400 }, 00:31:50.400 "ctrlr_data": { 00:31:50.400 "cntlid": 1, 00:31:50.400 "vendor_id": "0x8086", 00:31:50.400 "model_number": "SPDK bdev Controller", 00:31:50.400 "serial_number": "00000000000000000000", 00:31:50.400 "firmware_revision": "25.01", 00:31:50.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.400 "oacs": { 00:31:50.400 "security": 0, 00:31:50.400 "format": 0, 00:31:50.400 "firmware": 0, 00:31:50.400 "ns_manage": 0 00:31:50.400 }, 00:31:50.400 "multi_ctrlr": true, 00:31:50.400 "ana_reporting": false 00:31:50.400 }, 00:31:50.400 "vs": { 00:31:50.400 "nvme_version": "1.3" 00:31:50.400 }, 00:31:50.400 "ns_data": { 00:31:50.400 "id": 1, 00:31:50.400 "can_share": true 00:31:50.400 } 00:31:50.400 } 00:31:50.400 ], 00:31:50.400 "mp_policy": "active_passive" 00:31:50.400 } 00:31:50.400 } 00:31:50.400 ] 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.400 [2024-09-27 15:50:30.709118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:50.400 [2024-09-27 15:50:30.709199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc1e0 (9): Bad file descriptor 00:31:50.400 [2024-09-27 15:50:30.843013] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.400 [ 00:31:50.400 { 00:31:50.400 "name": "nvme0n1", 00:31:50.400 "aliases": [ 00:31:50.400 "2c33f0ff-7111-407f-8a68-40f1f3cec0c8" 00:31:50.400 ], 00:31:50.400 "product_name": "NVMe disk", 00:31:50.400 "block_size": 512, 00:31:50.400 "num_blocks": 2097152, 00:31:50.400 "uuid": "2c33f0ff-7111-407f-8a68-40f1f3cec0c8", 00:31:50.400 "numa_id": 0, 00:31:50.400 "assigned_rate_limits": { 00:31:50.400 "rw_ios_per_sec": 0, 00:31:50.400 "rw_mbytes_per_sec": 0, 00:31:50.400 "r_mbytes_per_sec": 0, 00:31:50.400 "w_mbytes_per_sec": 0 00:31:50.400 }, 00:31:50.400 "claimed": false, 00:31:50.400 "zoned": false, 00:31:50.400 "supported_io_types": { 00:31:50.400 "read": true, 00:31:50.400 "write": true, 00:31:50.400 "unmap": false, 00:31:50.400 "flush": true, 00:31:50.400 "reset": true, 00:31:50.400 "nvme_admin": true, 00:31:50.400 "nvme_io": true, 00:31:50.400 "nvme_io_md": false, 00:31:50.400 "write_zeroes": true, 00:31:50.400 "zcopy": false, 00:31:50.400 "get_zone_info": false, 00:31:50.400 "zone_management": false, 00:31:50.400 "zone_append": false, 00:31:50.400 "compare": true, 00:31:50.400 "compare_and_write": true, 00:31:50.400 "abort": true, 00:31:50.400 "seek_hole": false, 00:31:50.400 "seek_data": false, 00:31:50.400 "copy": true, 00:31:50.400 "nvme_iov_md": false 00:31:50.400 }, 00:31:50.400 "memory_domains": [ 00:31:50.400 { 00:31:50.400 "dma_device_id": "system", 00:31:50.400 "dma_device_type": 1 00:31:50.400 } 00:31:50.400 ], 00:31:50.400 "driver_specific": { 00:31:50.400 "nvme": [ 00:31:50.400 { 00:31:50.400 "trid": { 00:31:50.400 "trtype": "TCP", 00:31:50.400 "adrfam": "IPv4", 00:31:50.400 "traddr": "10.0.0.2", 00:31:50.400 "trsvcid": "4420", 00:31:50.400 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:50.400 }, 00:31:50.400 "ctrlr_data": { 00:31:50.400 "cntlid": 2, 00:31:50.400 "vendor_id": "0x8086", 00:31:50.400 "model_number": "SPDK bdev Controller", 00:31:50.400 "serial_number": "00000000000000000000", 00:31:50.400 "firmware_revision": "25.01", 00:31:50.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.400 "oacs": { 00:31:50.400 "security": 0, 00:31:50.400 "format": 0, 00:31:50.400 "firmware": 0, 00:31:50.400 "ns_manage": 0 00:31:50.400 }, 00:31:50.400 "multi_ctrlr": true, 00:31:50.400 "ana_reporting": false 00:31:50.400 }, 00:31:50.400 "vs": { 00:31:50.400 "nvme_version": "1.3" 00:31:50.400 }, 00:31:50.400 "ns_data": { 00:31:50.400 "id": 1, 00:31:50.400 "can_share": true 00:31:50.400 } 00:31:50.400 } 00:31:50.400 ], 00:31:50.400 "mp_policy": "active_passive" 00:31:50.400 } 00:31:50.400 } 00:31:50.400 ] 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.400 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
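The async_init flow above exercises the host-side bdev_nvme lifecycle end to end: attaching to cnode0 creates bdev nvme0n1 (the first dump shows cntlid 1), bdev_nvme_reset_controller drops and re-establishes the TCP connection (the post-reset dump shows the same uuid but cntlid 2, since the target hands out a fresh controller ID on reconnect), and detaching removes the controller and its bdev. A sketch of the same lifecycle with rpc.py, under the same assumptions as the sketch above:

# Host-side controller lifecycle, mirroring host/async_init.sh.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 1
rpc.py bdev_nvme_reset_controller nvme0
rpc.py bdev_get_bdevs -b nvme0n1        # same uuid, cntlid == 2 after reconnect
rpc.py bdev_nvme_detach_controller nvme0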
00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0J5Y83ENFn 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0J5Y83ENFn 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0J5Y83ENFn 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 [2024-09-27 15:50:30.933825] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:50.661 [2024-09-27 15:50:30.934004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 [2024-09-27 15:50:30.957907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:50.661 nvme0n1 00:31:50.661 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.661 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:50.661 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.661 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 [ 00:31:50.661 { 00:31:50.661 "name": "nvme0n1", 00:31:50.661 "aliases": [ 00:31:50.661 "2c33f0ff-7111-407f-8a68-40f1f3cec0c8" 00:31:50.661 ], 00:31:50.661 "product_name": "NVMe disk", 00:31:50.661 "block_size": 512, 00:31:50.661 "num_blocks": 2097152, 00:31:50.661 "uuid": "2c33f0ff-7111-407f-8a68-40f1f3cec0c8", 00:31:50.661 "numa_id": 0, 00:31:50.661 "assigned_rate_limits": { 00:31:50.661 "rw_ios_per_sec": 0, 00:31:50.661 "rw_mbytes_per_sec": 0, 00:31:50.661 "r_mbytes_per_sec": 0, 00:31:50.661 "w_mbytes_per_sec": 0 00:31:50.661 }, 00:31:50.661 "claimed": false, 00:31:50.661 "zoned": false, 00:31:50.661 "supported_io_types": { 00:31:50.661 "read": true, 00:31:50.661 "write": true, 00:31:50.661 "unmap": false, 00:31:50.661 "flush": true, 00:31:50.661 "reset": true, 00:31:50.661 "nvme_admin": true, 00:31:50.661 "nvme_io": true, 00:31:50.662 "nvme_io_md": false, 00:31:50.662 "write_zeroes": true, 00:31:50.662 "zcopy": false, 00:31:50.662 "get_zone_info": false, 00:31:50.662 "zone_management": false, 00:31:50.662 "zone_append": false, 00:31:50.662 "compare": true, 00:31:50.662 "compare_and_write": true, 00:31:50.662 "abort": true, 00:31:50.662 "seek_hole": false, 00:31:50.662 "seek_data": false, 00:31:50.662 "copy": true, 00:31:50.662 "nvme_iov_md": false 00:31:50.662 }, 00:31:50.662 "memory_domains": [ 00:31:50.662 { 00:31:50.662 "dma_device_id": "system", 00:31:50.662 "dma_device_type": 1 00:31:50.662 } 00:31:50.662 ], 00:31:50.662 "driver_specific": { 00:31:50.662 "nvme": [ 00:31:50.662 { 00:31:50.662 "trid": { 00:31:50.662 "trtype": "TCP", 00:31:50.662 "adrfam": "IPv4", 00:31:50.662 "traddr": "10.0.0.2", 00:31:50.662 "trsvcid": "4421", 00:31:50.662 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:50.662 }, 00:31:50.662 "ctrlr_data": { 00:31:50.662 "cntlid": 3, 00:31:50.662 "vendor_id": "0x8086", 00:31:50.662 "model_number": "SPDK bdev Controller", 00:31:50.662 "serial_number": "00000000000000000000", 00:31:50.662 "firmware_revision": "25.01", 00:31:50.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.662 "oacs": { 00:31:50.662 "security": 0, 00:31:50.662 "format": 0, 00:31:50.662 "firmware": 0, 00:31:50.662 "ns_manage": 0 00:31:50.662 }, 00:31:50.662 "multi_ctrlr": true, 00:31:50.662 "ana_reporting": false 00:31:50.662 }, 00:31:50.662 "vs": { 00:31:50.662 "nvme_version": "1.3" 00:31:50.662 }, 00:31:50.662 "ns_data": { 00:31:50.662 "id": 1, 00:31:50.662 "can_share": true 00:31:50.662 } 00:31:50.662 } 00:31:50.662 ], 00:31:50.662 "mp_policy": "active_passive" 00:31:50.662 } 00:31:50.662 } 00:31:50.662 ] 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0J5Y83ENFn 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
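The TLS leg of async_init above (host/async_init.sh@53 through @66) boils down to registering a PSK in the keyring, restricting the subsystem to a known host, listening with --secure-channel, and attaching with the same key. A condensed sketch of those steps, assuming scripts/rpc.py and a target already serving nqn.2016-06.io.spdk:cnode0; the interchange-format key, NQNs, address, and port are the ones printed in the log, while the rpc.py path and the explicit redirect are assumptions:

  KEY_PATH=$(mktemp)
  # Write the interchange-format PSK; keyring_file_add_key insists on 0600 perms.
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  # Lock the subsystem down to an allow-listed host, then open a TLS listener.
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  # Attach from the host side with the same PSK.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Cleanup, as at the end of the test: detach first, then drop the key file.
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
  rm -f "$KEY_PATH"

The two NOTICE lines above ("TLS support is considered experimental") are expected on the 25.01-era target exercised here, not a failure of the test.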
00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.662 rmmod nvme_tcp 00:31:50.662 rmmod nvme_fabrics 00:31:50.662 rmmod nvme_keyring 00:31:50.662 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 528174 ']' 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 528174 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 528174 ']' 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 528174 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 528174 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:50.922 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 528174' 00:31:50.923 killing process with pid 528174 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 528174 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 528174 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.923 
15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.923 15:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.466 00:31:53.466 real 0m12.044s 00:31:53.466 user 0m4.251s 00:31:53.466 sys 0m6.340s 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:53.466 ************************************ 00:31:53.466 END TEST nvmf_async_init 00:31:53.466 ************************************ 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.466 ************************************ 00:31:53.466 START TEST dma 00:31:53.466 ************************************ 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:53.466 * Looking for test storage... 00:31:53.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.466 --rc genhtml_branch_coverage=1 00:31:53.466 --rc genhtml_function_coverage=1 00:31:53.466 --rc genhtml_legend=1 00:31:53.466 --rc geninfo_all_blocks=1 00:31:53.466 --rc geninfo_unexecuted_blocks=1 00:31:53.466 00:31:53.466 ' 00:31:53.466 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:53.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.466 --rc genhtml_branch_coverage=1 00:31:53.466 --rc genhtml_function_coverage=1 00:31:53.466 --rc genhtml_legend=1 00:31:53.466 --rc geninfo_all_blocks=1 00:31:53.467 --rc geninfo_unexecuted_blocks=1 00:31:53.467 00:31:53.467 ' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:53.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.467 --rc genhtml_branch_coverage=1 00:31:53.467 --rc genhtml_function_coverage=1 00:31:53.467 --rc genhtml_legend=1 00:31:53.467 --rc geninfo_all_blocks=1 00:31:53.467 --rc geninfo_unexecuted_blocks=1 00:31:53.467 00:31:53.467 ' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:53.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.467 --rc genhtml_branch_coverage=1 00:31:53.467 --rc genhtml_function_coverage=1 00:31:53.467 --rc genhtml_legend=1 00:31:53.467 --rc geninfo_all_blocks=1 00:31:53.467 --rc geninfo_unexecuted_blocks=1 00:31:53.467 00:31:53.467 ' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.467 
15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:53.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:53.467 00:31:53.467 real 0m0.239s 00:31:53.467 user 0m0.153s 00:31:53.467 sys 0m0.101s 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:53.467 ************************************ 00:31:53.467 END TEST dma 00:31:53.467 ************************************ 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.467 ************************************ 00:31:53.467 START TEST nvmf_identify 00:31:53.467 
************************************ 00:31:53.467 15:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:53.728 * Looking for test storage... 00:31:53.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.728 15:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:53.728 15:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:31:53.728 15:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:53.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.729 --rc genhtml_branch_coverage=1 00:31:53.729 --rc genhtml_function_coverage=1 00:31:53.729 --rc genhtml_legend=1 00:31:53.729 --rc geninfo_all_blocks=1 00:31:53.729 --rc geninfo_unexecuted_blocks=1 00:31:53.729 00:31:53.729 ' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:53.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.729 --rc genhtml_branch_coverage=1 00:31:53.729 --rc genhtml_function_coverage=1 00:31:53.729 --rc genhtml_legend=1 00:31:53.729 --rc geninfo_all_blocks=1 00:31:53.729 --rc geninfo_unexecuted_blocks=1 00:31:53.729 00:31:53.729 ' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:53.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.729 --rc genhtml_branch_coverage=1 00:31:53.729 --rc genhtml_function_coverage=1 00:31:53.729 --rc genhtml_legend=1 00:31:53.729 --rc geninfo_all_blocks=1 00:31:53.729 --rc geninfo_unexecuted_blocks=1 00:31:53.729 00:31:53.729 ' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:53.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.729 --rc genhtml_branch_coverage=1 00:31:53.729 --rc genhtml_function_coverage=1 00:31:53.729 --rc genhtml_legend=1 00:31:53.729 --rc geninfo_all_blocks=1 00:31:53.729 --rc geninfo_unexecuted_blocks=1 00:31:53.729 00:31:53.729 ' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:53.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:53.729 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.730 15:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:01.873 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:01.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:01.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.874 
15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:01.874 Found net devices under 0000:31:00.0: cvl_0_0 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:01.874 Found net devices under 0000:31:00.1: cvl_0_1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.874 15:50:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:01.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:32:01.874 00:32:01.874 --- 10.0.0.2 ping statistics --- 00:32:01.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.874 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:32:01.874 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:01.874 00:32:01.874 --- 10.0.0.1 ping statistics --- 00:32:01.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.875 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=532950 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 532950 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 532950 ']' 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.875 15:50:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:01.875 [2024-09-27 15:50:41.853871] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:32:01.875 [2024-09-27 15:50:41.853950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.875 [2024-09-27 15:50:41.944281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.875 [2024-09-27 15:50:41.992911] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.875 [2024-09-27 15:50:41.992962] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.875 [2024-09-27 15:50:41.992970] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.875 [2024-09-27 15:50:41.992977] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.875 [2024-09-27 15:50:41.992984] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.875 [2024-09-27 15:50:41.993183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.875 [2024-09-27 15:50:41.993336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.875 [2024-09-27 15:50:41.993494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.875 [2024-09-27 15:50:41.993494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 [2024-09-27 15:50:42.670047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 Malloc0 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 [2024-09-27 15:50:42.779909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.449 [ 00:32:02.449 { 00:32:02.449 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:02.449 "subtype": "Discovery", 00:32:02.449 "listen_addresses": [ 00:32:02.449 { 00:32:02.449 "trtype": "TCP", 00:32:02.449 "adrfam": "IPv4", 00:32:02.449 "traddr": "10.0.0.2", 00:32:02.449 "trsvcid": "4420" 00:32:02.449 } 00:32:02.449 ], 00:32:02.449 "allow_any_host": true, 00:32:02.449 "hosts": [] 00:32:02.449 }, 00:32:02.449 { 00:32:02.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:02.449 "subtype": "NVMe", 00:32:02.449 "listen_addresses": [ 00:32:02.449 { 00:32:02.449 "trtype": "TCP", 00:32:02.449 "adrfam": "IPv4", 00:32:02.449 "traddr": "10.0.0.2", 00:32:02.449 "trsvcid": "4420" 00:32:02.449 } 00:32:02.449 ], 00:32:02.449 "allow_any_host": true, 00:32:02.449 "hosts": [], 00:32:02.449 "serial_number": "SPDK00000000000001", 00:32:02.449 "model_number": "SPDK bdev Controller", 00:32:02.449 "max_namespaces": 32, 00:32:02.449 "min_cntlid": 1, 00:32:02.449 "max_cntlid": 65519, 00:32:02.449 "namespaces": [ 00:32:02.449 { 00:32:02.449 "nsid": 1, 00:32:02.449 "bdev_name": "Malloc0", 00:32:02.449 "name": "Malloc0", 00:32:02.449 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:02.449 "eui64": "ABCDEF0123456789", 00:32:02.449 "uuid": "cf4b0225-fc7f-48a4-8ea2-db25349a09cf" 00:32:02.449 } 00:32:02.449 ] 00:32:02.449 } 00:32:02.449 ] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.449 15:50:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:02.449 [2024-09-27 15:50:42.843764] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:02.449 [2024-09-27 15:50:42.843822] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533177 ] 00:32:02.449 [2024-09-27 15:50:42.884109] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:02.449 [2024-09-27 15:50:42.884183] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:02.449 [2024-09-27 15:50:42.884189] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:02.450 [2024-09-27 15:50:42.884205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:02.450 [2024-09-27 15:50:42.884216] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:02.450 [2024-09-27 15:50:42.885095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:02.450 [2024-09-27 15:50:42.885150] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x185d0e0 0 00:32:02.450 [2024-09-27 15:50:42.898922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:02.450 [2024-09-27 15:50:42.898940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:02.450 [2024-09-27 15:50:42.898945] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:02.450 [2024-09-27 15:50:42.898949] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:02.450 [2024-09-27 15:50:42.898986] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.898992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.898996] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.899012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:02.450 [2024-09-27 15:50:42.899036] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.906908] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.906918] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.906922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.906927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.906938] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:02.450 [2024-09-27 15:50:42.906946] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:02.450 [2024-09-27 15:50:42.906951] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:02.450 [2024-09-27 15:50:42.906967] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.906972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.906976] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.906984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.907000] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.907229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.907236] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.907239] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.907249] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:02.450 [2024-09-27 15:50:42.907257] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:02.450 [2024-09-27 15:50:42.907264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.907283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.907295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.907512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.907518] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.907522] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.907531] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:02.450 [2024-09-27 15:50:42.907540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.907547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.907561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.907572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 
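The NOTICE records above trace the NVMe-oF controller-enable handshake for the discovery controller: after FABRIC CONNECT establishes the admin queue, the host issues a series of FABRIC PROPERTY GET/SET commands to read VS and CAP, check CC.EN, disable the controller, write CC.EN = 1, and then poll CSTS.RDY until it reads 1. The target-side state this run exercises was applied earlier through rpc_cmd; below is a minimal standalone sketch of the same setup using SPDK's scripts/rpc.py client (the rpc.py path and the transport/subsystem-creation steps are assumptions, since they fall outside this excerpt — only the last four commands appear verbatim in the trace):

  # Sketch only: recreate the target state traced above with scripts/rpc.py.
  # Assumes an nvmf target app is already running and that the TCP transport
  # and the nqn.2016-06.io.spdk:cnode1 subsystem were created before this excerpt.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # should print the two subsystems dumped earlier in this log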
[2024-09-27 15:50:42.907782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.907788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.907792] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.907801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.907810] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907814] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.907818] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.907825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.907835] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.908013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.908020] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.908024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.908032] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:02.450 [2024-09-27 15:50:42.908037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.908045] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.908151] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:02.450 [2024-09-27 15:50:42.908156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.908168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908172] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.908182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.908193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.908419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.908425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:32:02.450 [2024-09-27 15:50:42.908428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908432] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.908437] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:02.450 [2024-09-27 15:50:42.908447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.908461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.908471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.908675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.450 [2024-09-27 15:50:42.908681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.450 [2024-09-27 15:50:42.908684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.450 [2024-09-27 15:50:42.908693] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:02.450 [2024-09-27 15:50:42.908698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:02.450 [2024-09-27 15:50:42.908706] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:02.450 [2024-09-27 15:50:42.908715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:02.450 [2024-09-27 15:50:42.908724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.450 [2024-09-27 15:50:42.908728] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.450 [2024-09-27 15:50:42.908735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.450 [2024-09-27 15:50:42.908746] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.450 [2024-09-27 15:50:42.909004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.450 [2024-09-27 15:50:42.909012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.450 [2024-09-27 15:50:42.909016] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185d0e0): datao=0, datal=4096, cccid=0 00:32:02.451 [2024-09-27 15:50:42.909026] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c7dc0) on tqpair(0x185d0e0): expected_datao=0, 
payload_size=4096 00:32:02.451 [2024-09-27 15:50:42.909030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909042] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909046] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.451 [2024-09-27 15:50:42.909195] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.451 [2024-09-27 15:50:42.909199] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909202] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.451 [2024-09-27 15:50:42.909211] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:02.451 [2024-09-27 15:50:42.909217] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:02.451 [2024-09-27 15:50:42.909221] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:02.451 [2024-09-27 15:50:42.909226] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:02.451 [2024-09-27 15:50:42.909231] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:02.451 [2024-09-27 15:50:42.909236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:02.451 [2024-09-27 15:50:42.909244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:02.451 [2024-09-27 15:50:42.909252] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909259] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.451 [2024-09-27 15:50:42.909278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.451 [2024-09-27 15:50:42.909481] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.451 [2024-09-27 15:50:42.909488] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.451 [2024-09-27 15:50:42.909491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.451 [2024-09-27 15:50:42.909503] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.451 [2024-09-27 15:50:42.909523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909527] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.451 [2024-09-27 15:50:42.909543] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909550] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.451 [2024-09-27 15:50:42.909565] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.451 [2024-09-27 15:50:42.909583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:02.451 [2024-09-27 15:50:42.909594] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:02.451 [2024-09-27 15:50:42.909601] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.451 [2024-09-27 15:50:42.909624] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7dc0, cid 0, qid 0 00:32:02.451 [2024-09-27 15:50:42.909629] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c7f40, cid 1, qid 0 00:32:02.451 [2024-09-27 15:50:42.909634] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c80c0, cid 2, qid 0 00:32:02.451 [2024-09-27 15:50:42.909638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.451 [2024-09-27 15:50:42.909643] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c83c0, cid 4, qid 0 00:32:02.451 [2024-09-27 15:50:42.909881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.451 [2024-09-27 15:50:42.909887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.451 [2024-09-27 15:50:42.909891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x18c83c0) on tqpair=0x185d0e0 00:32:02.451 [2024-09-27 15:50:42.909907] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:02.451 [2024-09-27 15:50:42.909912] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:02.451 [2024-09-27 15:50:42.909922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.909926] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.909932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.451 [2024-09-27 15:50:42.909943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c83c0, cid 4, qid 0 00:32:02.451 [2024-09-27 15:50:42.910145] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.451 [2024-09-27 15:50:42.910151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.451 [2024-09-27 15:50:42.910155] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185d0e0): datao=0, datal=4096, cccid=4 00:32:02.451 [2024-09-27 15:50:42.910163] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c83c0) on tqpair(0x185d0e0): expected_datao=0, payload_size=4096 00:32:02.451 [2024-09-27 15:50:42.910168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910185] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910190] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.451 [2024-09-27 15:50:42.910373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.451 [2024-09-27 15:50:42.910376] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c83c0) on tqpair=0x185d0e0 00:32:02.451 [2024-09-27 15:50:42.910394] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:02.451 [2024-09-27 15:50:42.910424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.910435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.451 [2024-09-27 15:50:42.910442] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x185d0e0) 00:32:02.451 [2024-09-27 15:50:42.910456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.451 [2024-09-27 
15:50:42.910469] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c83c0, cid 4, qid 0 00:32:02.451 [2024-09-27 15:50:42.910474] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8540, cid 5, qid 0 00:32:02.451 [2024-09-27 15:50:42.910740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.451 [2024-09-27 15:50:42.910746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.451 [2024-09-27 15:50:42.910750] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910753] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185d0e0): datao=0, datal=1024, cccid=4 00:32:02.451 [2024-09-27 15:50:42.910758] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c83c0) on tqpair(0x185d0e0): expected_datao=0, payload_size=1024 00:32:02.451 [2024-09-27 15:50:42.910762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910769] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910773] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910779] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.451 [2024-09-27 15:50:42.910784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.451 [2024-09-27 15:50:42.910788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.451 [2024-09-27 15:50:42.910792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8540) on tqpair=0x185d0e0 00:32:02.716 [2024-09-27 15:50:42.952905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.716 [2024-09-27 15:50:42.952919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.716 [2024-09-27 15:50:42.952923] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.952927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c83c0) on tqpair=0x185d0e0 00:32:02.716 [2024-09-27 15:50:42.952941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.952945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185d0e0) 00:32:02.716 [2024-09-27 15:50:42.952952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.716 [2024-09-27 15:50:42.952969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c83c0, cid 4, qid 0 00:32:02.716 [2024-09-27 15:50:42.953163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.716 [2024-09-27 15:50:42.953172] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.716 [2024-09-27 15:50:42.953177] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953192] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185d0e0): datao=0, datal=3072, cccid=4 00:32:02.716 [2024-09-27 15:50:42.953198] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c83c0) on tqpair(0x185d0e0): expected_datao=0, payload_size=3072 00:32:02.716 [2024-09-27 15:50:42.953204] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953223] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953228] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.716 [2024-09-27 15:50:42.953377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.716 [2024-09-27 15:50:42.953381] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c83c0) on tqpair=0x185d0e0 00:32:02.716 [2024-09-27 15:50:42.953393] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x185d0e0) 00:32:02.716 [2024-09-27 15:50:42.953403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.716 [2024-09-27 15:50:42.953417] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c83c0, cid 4, qid 0 00:32:02.716 [2024-09-27 15:50:42.953650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.716 [2024-09-27 15:50:42.953658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.716 [2024-09-27 15:50:42.953661] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953665] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x185d0e0): datao=0, datal=8, cccid=4 00:32:02.716 [2024-09-27 15:50:42.953669] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18c83c0) on tqpair(0x185d0e0): expected_datao=0, payload_size=8 00:32:02.716 [2024-09-27 15:50:42.953674] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953680] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.953684] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.995069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.716 [2024-09-27 15:50:42.995080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.716 [2024-09-27 15:50:42.995084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.716 [2024-09-27 15:50:42.995088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c83c0) on tqpair=0x185d0e0 00:32:02.716 ===================================================== 00:32:02.716 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:02.716 ===================================================== 00:32:02.716 Controller Capabilities/Features 00:32:02.716 ================================ 00:32:02.716 Vendor ID: 0000 00:32:02.716 Subsystem Vendor ID: 0000 00:32:02.716 Serial Number: .................... 00:32:02.716 Model Number: ........................................ 
00:32:02.716 Firmware Version: 25.01 00:32:02.716 Recommended Arb Burst: 0 00:32:02.716 IEEE OUI Identifier: 00 00 00 00:32:02.716 Multi-path I/O 00:32:02.716 May have multiple subsystem ports: No 00:32:02.716 May have multiple controllers: No 00:32:02.716 Associated with SR-IOV VF: No 00:32:02.716 Max Data Transfer Size: 131072 00:32:02.716 Max Number of Namespaces: 0 00:32:02.716 Max Number of I/O Queues: 1024 00:32:02.716 NVMe Specification Version (VS): 1.3 00:32:02.716 NVMe Specification Version (Identify): 1.3 00:32:02.716 Maximum Queue Entries: 128 00:32:02.716 Contiguous Queues Required: Yes 00:32:02.717 Arbitration Mechanisms Supported 00:32:02.717 Weighted Round Robin: Not Supported 00:32:02.717 Vendor Specific: Not Supported 00:32:02.717 Reset Timeout: 15000 ms 00:32:02.717 Doorbell Stride: 4 bytes 00:32:02.717 NVM Subsystem Reset: Not Supported 00:32:02.717 Command Sets Supported 00:32:02.717 NVM Command Set: Supported 00:32:02.717 Boot Partition: Not Supported 00:32:02.717 Memory Page Size Minimum: 4096 bytes 00:32:02.717 Memory Page Size Maximum: 4096 bytes 00:32:02.717 Persistent Memory Region: Not Supported 00:32:02.717 Optional Asynchronous Events Supported 00:32:02.717 Namespace Attribute Notices: Not Supported 00:32:02.717 Firmware Activation Notices: Not Supported 00:32:02.717 ANA Change Notices: Not Supported 00:32:02.717 PLE Aggregate Log Change Notices: Not Supported 00:32:02.717 LBA Status Info Alert Notices: Not Supported 00:32:02.717 EGE Aggregate Log Change Notices: Not Supported 00:32:02.717 Normal NVM Subsystem Shutdown event: Not Supported 00:32:02.717 Zone Descriptor Change Notices: Not Supported 00:32:02.717 Discovery Log Change Notices: Supported 00:32:02.717 Controller Attributes 00:32:02.717 128-bit Host Identifier: Not Supported 00:32:02.717 Non-Operational Permissive Mode: Not Supported 00:32:02.717 NVM Sets: Not Supported 00:32:02.717 Read Recovery Levels: Not Supported 00:32:02.717 Endurance Groups: Not Supported 00:32:02.717 Predictable Latency Mode: Not Supported 00:32:02.717 Traffic Based Keep ALive: Not Supported 00:32:02.717 Namespace Granularity: Not Supported 00:32:02.717 SQ Associations: Not Supported 00:32:02.717 UUID List: Not Supported 00:32:02.717 Multi-Domain Subsystem: Not Supported 00:32:02.717 Fixed Capacity Management: Not Supported 00:32:02.717 Variable Capacity Management: Not Supported 00:32:02.717 Delete Endurance Group: Not Supported 00:32:02.717 Delete NVM Set: Not Supported 00:32:02.717 Extended LBA Formats Supported: Not Supported 00:32:02.717 Flexible Data Placement Supported: Not Supported 00:32:02.717 00:32:02.717 Controller Memory Buffer Support 00:32:02.717 ================================ 00:32:02.717 Supported: No 00:32:02.717 00:32:02.717 Persistent Memory Region Support 00:32:02.717 ================================ 00:32:02.717 Supported: No 00:32:02.717 00:32:02.717 Admin Command Set Attributes 00:32:02.717 ============================ 00:32:02.717 Security Send/Receive: Not Supported 00:32:02.717 Format NVM: Not Supported 00:32:02.717 Firmware Activate/Download: Not Supported 00:32:02.717 Namespace Management: Not Supported 00:32:02.717 Device Self-Test: Not Supported 00:32:02.717 Directives: Not Supported 00:32:02.717 NVMe-MI: Not Supported 00:32:02.717 Virtualization Management: Not Supported 00:32:02.717 Doorbell Buffer Config: Not Supported 00:32:02.717 Get LBA Status Capability: Not Supported 00:32:02.717 Command & Feature Lockdown Capability: Not Supported 00:32:02.717 Abort Command Limit: 1 00:32:02.717 Async 
Event Request Limit: 4 00:32:02.717 Number of Firmware Slots: N/A 00:32:02.717 Firmware Slot 1 Read-Only: N/A 00:32:02.717 Firmware Activation Without Reset: N/A 00:32:02.717 Multiple Update Detection Support: N/A 00:32:02.717 Firmware Update Granularity: No Information Provided 00:32:02.717 Per-Namespace SMART Log: No 00:32:02.717 Asymmetric Namespace Access Log Page: Not Supported 00:32:02.717 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:02.717 Command Effects Log Page: Not Supported 00:32:02.717 Get Log Page Extended Data: Supported 00:32:02.717 Telemetry Log Pages: Not Supported 00:32:02.717 Persistent Event Log Pages: Not Supported 00:32:02.717 Supported Log Pages Log Page: May Support 00:32:02.717 Commands Supported & Effects Log Page: Not Supported 00:32:02.717 Feature Identifiers & Effects Log Page:May Support 00:32:02.717 NVMe-MI Commands & Effects Log Page: May Support 00:32:02.717 Data Area 4 for Telemetry Log: Not Supported 00:32:02.717 Error Log Page Entries Supported: 128 00:32:02.717 Keep Alive: Not Supported 00:32:02.717 00:32:02.717 NVM Command Set Attributes 00:32:02.717 ========================== 00:32:02.717 Submission Queue Entry Size 00:32:02.717 Max: 1 00:32:02.717 Min: 1 00:32:02.717 Completion Queue Entry Size 00:32:02.717 Max: 1 00:32:02.717 Min: 1 00:32:02.717 Number of Namespaces: 0 00:32:02.717 Compare Command: Not Supported 00:32:02.717 Write Uncorrectable Command: Not Supported 00:32:02.717 Dataset Management Command: Not Supported 00:32:02.717 Write Zeroes Command: Not Supported 00:32:02.717 Set Features Save Field: Not Supported 00:32:02.717 Reservations: Not Supported 00:32:02.717 Timestamp: Not Supported 00:32:02.717 Copy: Not Supported 00:32:02.717 Volatile Write Cache: Not Present 00:32:02.717 Atomic Write Unit (Normal): 1 00:32:02.717 Atomic Write Unit (PFail): 1 00:32:02.717 Atomic Compare & Write Unit: 1 00:32:02.717 Fused Compare & Write: Supported 00:32:02.717 Scatter-Gather List 00:32:02.717 SGL Command Set: Supported 00:32:02.717 SGL Keyed: Supported 00:32:02.717 SGL Bit Bucket Descriptor: Not Supported 00:32:02.717 SGL Metadata Pointer: Not Supported 00:32:02.717 Oversized SGL: Not Supported 00:32:02.717 SGL Metadata Address: Not Supported 00:32:02.717 SGL Offset: Supported 00:32:02.717 Transport SGL Data Block: Not Supported 00:32:02.717 Replay Protected Memory Block: Not Supported 00:32:02.717 00:32:02.717 Firmware Slot Information 00:32:02.717 ========================= 00:32:02.717 Active slot: 0 00:32:02.717 00:32:02.717 00:32:02.717 Error Log 00:32:02.717 ========= 00:32:02.717 00:32:02.717 Active Namespaces 00:32:02.717 ================= 00:32:02.717 Discovery Log Page 00:32:02.717 ================== 00:32:02.717 Generation Counter: 2 00:32:02.717 Number of Records: 2 00:32:02.717 Record Format: 0 00:32:02.717 00:32:02.717 Discovery Log Entry 0 00:32:02.717 ---------------------- 00:32:02.717 Transport Type: 3 (TCP) 00:32:02.717 Address Family: 1 (IPv4) 00:32:02.717 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:02.717 Entry Flags: 00:32:02.717 Duplicate Returned Information: 1 00:32:02.717 Explicit Persistent Connection Support for Discovery: 1 00:32:02.717 Transport Requirements: 00:32:02.717 Secure Channel: Not Required 00:32:02.717 Port ID: 0 (0x0000) 00:32:02.717 Controller ID: 65535 (0xffff) 00:32:02.717 Admin Max SQ Size: 128 00:32:02.717 Transport Service Identifier: 4420 00:32:02.717 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:02.717 Transport Address: 10.0.0.2 00:32:02.717 
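Discovery Log Entry 0 above describes the discovery subsystem itself (Subsystem Type 3); Entry 1, which follows, advertises the NVM subsystem nqn.2016-06.io.spdk:cnode1 (Subsystem Type 2) at the same 10.0.0.2:4420 portal, matching the GET LOG PAGE (02) fetches traced earlier (header first, then the full 3072-byte page, then the 8-byte generation counter re-read). The same view could be cross-checked from a Linux initiator with nvme-cli; this is an assumption for illustration, not a command executed in this job:

  # Hypothetical cross-check with nvme-cli (assumes nvme-cli is installed):
  nvme discover -t tcp -a 10.0.0.2 -s 4420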
Discovery Log Entry 1 00:32:02.717 ---------------------- 00:32:02.717 Transport Type: 3 (TCP) 00:32:02.717 Address Family: 1 (IPv4) 00:32:02.717 Subsystem Type: 2 (NVM Subsystem) 00:32:02.717 Entry Flags: 00:32:02.717 Duplicate Returned Information: 0 00:32:02.717 Explicit Persistent Connection Support for Discovery: 0 00:32:02.717 Transport Requirements: 00:32:02.717 Secure Channel: Not Required 00:32:02.717 Port ID: 0 (0x0000) 00:32:02.717 Controller ID: 65535 (0xffff) 00:32:02.717 Admin Max SQ Size: 128 00:32:02.717 Transport Service Identifier: 4420 00:32:02.717 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:02.717 Transport Address: 10.0.0.2 [2024-09-27 15:50:42.995191] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:02.717 [2024-09-27 15:50:42.995204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7dc0) on tqpair=0x185d0e0 00:32:02.717 [2024-09-27 15:50:42.995211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.717 [2024-09-27 15:50:42.995217] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c7f40) on tqpair=0x185d0e0 00:32:02.717 [2024-09-27 15:50:42.995221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.717 [2024-09-27 15:50:42.995227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c80c0) on tqpair=0x185d0e0 00:32:02.717 [2024-09-27 15:50:42.995231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.717 [2024-09-27 15:50:42.995236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.717 [2024-09-27 15:50:42.995241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.717 [2024-09-27 15:50:42.995250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.717 [2024-09-27 15:50:42.995256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.717 [2024-09-27 15:50:42.995259] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.717 [2024-09-27 15:50:42.995267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.717 [2024-09-27 15:50:42.995281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.995350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.995356] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.995360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.995371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 
15:50:42.995385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.995399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.995582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.995588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.995592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.995601] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:02.718 [2024-09-27 15:50:42.995609] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:02.718 [2024-09-27 15:50:42.995619] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:42.995633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.995644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.995852] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.995859] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.995862] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.995876] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995880] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.995883] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:42.995890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.995908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.996126] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.996133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.996136] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.996153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996161] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:42.996167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.996177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.996367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.996373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.996376] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.996390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:42.996404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.996414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.996618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:42.996624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:42.996627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:42.996641] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:42.996648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:42.996655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:42.996665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:42.996892] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:43.000727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:43.000731] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.000735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:43.000747] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.000751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.000754] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x185d0e0) 00:32:02.718 [2024-09-27 15:50:43.000761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.718 [2024-09-27 15:50:43.000773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18c8240, cid 3, qid 0 00:32:02.718 [2024-09-27 15:50:43.000988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:43.000995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:43.001000] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.001004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18c8240) on tqpair=0x185d0e0 00:32:02.718 [2024-09-27 15:50:43.001015] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:32:02.718 00:32:02.718 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:02.718 [2024-09-27 15:50:43.046476] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:02.718 [2024-09-27 15:50:43.046527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533274 ] 00:32:02.718 [2024-09-27 15:50:43.081611] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:02.718 [2024-09-27 15:50:43.081669] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:02.718 [2024-09-27 15:50:43.081675] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:02.718 [2024-09-27 15:50:43.081690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:02.718 [2024-09-27 15:50:43.081700] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:02.718 [2024-09-27 15:50:43.085185] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:02.718 [2024-09-27 15:50:43.085230] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5410e0 0 00:32:02.718 [2024-09-27 15:50:43.092912] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:02.718 [2024-09-27 15:50:43.092928] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:02.718 [2024-09-27 15:50:43.092933] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:02.718 [2024-09-27 15:50:43.092936] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:02.718 [2024-09-27 15:50:43.092965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.092970] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.092975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.718 [2024-09-27 15:50:43.092990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:02.718 [2024-09-27 15:50:43.093014] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.718 [2024-09-27 15:50:43.099906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.718 [2024-09-27 15:50:43.099927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.718 [2024-09-27 15:50:43.099932] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.099937] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.718 [2024-09-27 15:50:43.099947] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:02.718 [2024-09-27 15:50:43.099954] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:02.718 [2024-09-27 15:50:43.099959] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:02.718 [2024-09-27 15:50:43.099975] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.718 [2024-09-27 15:50:43.099979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.099983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.099997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.100013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.100226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.100233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.100236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.100246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:02.719 [2024-09-27 15:50:43.100254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:02.719 [2024-09-27 15:50:43.100261] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100265] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.100275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.100286] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.100488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.100495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.100498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.100507] 
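The trace from host/identify.sh@45 onward repeats the same connect/read-registers/enable sequence, now against the NVM subsystem rather than the discovery subsystem. Both host-side invocations appear verbatim in this run and differ only in the subnqn field of the transport ID string:

  # The two spdk_nvme_identify invocations traced in this run (harness path kept as printed):
  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  $BIN -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  $BIN -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all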
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:02.719 [2024-09-27 15:50:43.100516] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.100523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100530] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.100537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.100547] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.100735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.100741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.100745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100749] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.100754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.100763] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.100778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.100788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.100960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.100969] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.100973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.100977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.100981] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:02.719 [2024-09-27 15:50:43.100986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.100994] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.101100] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:02.719 [2024-09-27 15:50:43.101104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 reg (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.101112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101115] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101119] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.101126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.101137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.101350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.101357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.101360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.101369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:02.719 [2024-09-27 15:50:43.101378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.101392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.101403] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.101673] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.101679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.101682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.101690] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:02.719 [2024-09-27 15:50:43.101695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:02.719 [2024-09-27 15:50:43.101703] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:02.719 [2024-09-27 15:50:43.101717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:02.719 [2024-09-27 15:50:43.101726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.101732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.101739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 
cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.719 [2024-09-27 15:50:43.101750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.102021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.719 [2024-09-27 15:50:43.102028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.719 [2024-09-27 15:50:43.102032] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.102036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=4096, cccid=0 00:32:02.719 [2024-09-27 15:50:43.102041] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5abdc0) on tqpair(0x5410e0): expected_datao=0, payload_size=4096 00:32:02.719 [2024-09-27 15:50:43.102046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.102061] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.102065] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.145903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.719 [2024-09-27 15:50:43.145914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.719 [2024-09-27 15:50:43.145917] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.145922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.719 [2024-09-27 15:50:43.145932] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:02.719 [2024-09-27 15:50:43.145937] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:02.719 [2024-09-27 15:50:43.145941] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:02.719 [2024-09-27 15:50:43.145945] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:02.719 [2024-09-27 15:50:43.145950] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:02.719 [2024-09-27 15:50:43.145955] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:02.719 [2024-09-27 15:50:43.145964] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:02.719 [2024-09-27 15:50:43.145971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.145976] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.719 [2024-09-27 15:50:43.145979] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.719 [2024-09-27 15:50:43.145987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.719 [2024-09-27 15:50:43.145999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.719 [2024-09-27 15:50:43.146176] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.720 [2024-09-27 15:50:43.146183] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.720 [2024-09-27 15:50:43.146186] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0 00:32:02.720 [2024-09-27 15:50:43.146197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.720 [2024-09-27 15:50:43.146221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.720 [2024-09-27 15:50:43.146241] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146245] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.720 [2024-09-27 15:50:43.146260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.720 [2024-09-27 15:50:43.146278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146297] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-09-27 15:50:43.146320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abdc0, cid 0, qid 0 00:32:02.720 [2024-09-27 15:50:43.146326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5abf40, cid 1, qid 0 00:32:02.720 [2024-09-27 15:50:43.146330] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac0c0, cid 2, qid 0 00:32:02.720 [2024-09-27 15:50:43.146335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.720 [2024-09-27 15:50:43.146340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.720 [2024-09-27 15:50:43.146594] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.720 [2024-09-27 15:50:43.146600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.720 [2024-09-27 15:50:43.146604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.720 [2024-09-27 15:50:43.146612] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:02.720 [2024-09-27 15:50:43.146617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146641] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146651] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.720 [2024-09-27 15:50:43.146668] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.720 [2024-09-27 15:50:43.146840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.720 [2024-09-27 15:50:43.146846] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.720 [2024-09-27 15:50:43.146849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.720 [2024-09-27 15:50:43.146927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.146944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.146948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.146955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-09-27 15:50:43.146967] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.720 [2024-09-27 15:50:43.147191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.720 [2024-09-27 15:50:43.147198] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.720 [2024-09-27 15:50:43.147202] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.147205] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=4096, cccid=4 00:32:02.720 [2024-09-27 15:50:43.147210] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac3c0) on tqpair(0x5410e0): expected_datao=0, payload_size=4096 00:32:02.720 [2024-09-27 15:50:43.147215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.147230] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.147234] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.720 [2024-09-27 15:50:43.188082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.720 [2024-09-27 15:50:43.188086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188090] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.720 [2024-09-27 15:50:43.188101] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:02.720 [2024-09-27 15:50:43.188115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.188126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:02.720 [2024-09-27 15:50:43.188133] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.720 [2024-09-27 15:50:43.188145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-09-27 15:50:43.188157] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.720 [2024-09-27 15:50:43.188259] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.720 [2024-09-27 15:50:43.188265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.720 [2024-09-27 15:50:43.188272] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188276] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=4096, cccid=4 00:32:02.720 [2024-09-27 15:50:43.188281] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac3c0) on tqpair(0x5410e0): expected_datao=0, payload_size=4096 00:32:02.720 [2024-09-27 15:50:43.188285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188297] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.720 [2024-09-27 15:50:43.188301] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.985 
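The trace to this point is the standard NVMe-oF host initialization sequence: enable the controller via fabrics property writes (CC.EN = 1, then poll CSTS.RDY = 1), IDENTIFY CONTROLLER (the IDENTIFY with cdw10:00000001, i.e. CNS 01h), arm the four ASYNC EVENT REQUESTs, negotiate the keep-alive timeout and queue count, and read the active namespace list (cdw10:00000002, CNS 02h). As a minimal sketch, the same attach flow can be driven with the kernel initiator via nvme-cli; the address, port, and NQN below are the ones used by this run, while the /dev/nvme0 device name is an assumption (this job itself uses SPDK's userspace initiator, not the kernel driver):

    # Discover, attach, inspect, then detach (sketch only, not part of this job).
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0     # IDENTIFY CONTROLLER (CNS 01h), as at cid:0 above
    nvme list-ns /dev/nvme0     # active namespace list (CNS 02h), as at cid:4 above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1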
[2024-09-27 15:50:43.233901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.233912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.233916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.233920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.233935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.233945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.233953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.233957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.233963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.233975] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.985 [2024-09-27 15:50:43.234160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.985 [2024-09-27 15:50:43.234166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.985 [2024-09-27 15:50:43.234170] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.234174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=4096, cccid=4 00:32:02.985 [2024-09-27 15:50:43.234178] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac3c0) on tqpair(0x5410e0): expected_datao=0, payload_size=4096 00:32:02.985 [2024-09-27 15:50:43.234183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.234196] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.234200] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.278902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.278911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.278915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.278919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.278928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature 
(timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278974] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:02.985 [2024-09-27 15:50:43.278978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:02.985 [2024-09-27 15:50:43.278984] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:02.985 [2024-09-27 15:50:43.279002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279020] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.985 [2024-09-27 15:50:43.279047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.985 [2024-09-27 15:50:43.279052] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac540, cid 5, qid 0 00:32:02.985 [2024-09-27 15:50:43.279148] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.279155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.279158] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279162] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.279169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.279175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.279178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac540) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.279191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x5ac540, cid 5, qid 0 00:32:02.985 [2024-09-27 15:50:43.279390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.279396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.279399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac540) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.279412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279416] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac540, cid 5, qid 0 00:32:02.985 [2024-09-27 15:50:43.279651] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.279660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.279664] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac540) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.279677] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279681] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac540, cid 5, qid 0 00:32:02.985 [2024-09-27 15:50:43.279898] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.985 [2024-09-27 15:50:43.279905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.985 [2024-09-27 15:50:43.279909] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279913] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac540) on tqpair=0x5410e0 00:32:02.985 [2024-09-27 15:50:43.279928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.279983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.985 [2024-09-27 15:50:43.279987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5410e0) 00:32:02.985 [2024-09-27 15:50:43.279993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.985 [2024-09-27 15:50:43.280005] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac540, cid 5, qid 0 00:32:02.985 [2024-09-27 15:50:43.280010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac3c0, cid 4, qid 0 00:32:02.986 [2024-09-27 15:50:43.280015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac6c0, cid 6, qid 0 00:32:02.986 [2024-09-27 15:50:43.280020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac840, cid 7, qid 0 00:32:02.986 [2024-09-27 15:50:43.280305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.986 [2024-09-27 15:50:43.280311] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.986 [2024-09-27 15:50:43.280315] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=8192, cccid=5 00:32:02.986 [2024-09-27 15:50:43.280323] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac540) on tqpair(0x5410e0): expected_datao=0, payload_size=8192 00:32:02.986 [2024-09-27 15:50:43.280328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280422] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280427] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280433] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.986 [2024-09-27 15:50:43.280439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.986 [2024-09-27 15:50:43.280442] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280446] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=512, cccid=4 00:32:02.986 [2024-09-27 15:50:43.280451] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac3c0) on tqpair(0x5410e0): expected_datao=0, payload_size=512 00:32:02.986 [2024-09-27 15:50:43.280455] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280461] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280465] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280471] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.986 [2024-09-27 15:50:43.280477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.986 [2024-09-27 15:50:43.280480] 
nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=512, cccid=6 00:32:02.986 [2024-09-27 15:50:43.280488] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac6c0) on tqpair(0x5410e0): expected_datao=0, payload_size=512 00:32:02.986 [2024-09-27 15:50:43.280492] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280499] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280502] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280508] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:02.986 [2024-09-27 15:50:43.280514] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:02.986 [2024-09-27 15:50:43.280517] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280521] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5410e0): datao=0, datal=4096, cccid=7 00:32:02.986 [2024-09-27 15:50:43.280525] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ac840) on tqpair(0x5410e0): expected_datao=0, payload_size=4096 00:32:02.986 [2024-09-27 15:50:43.280529] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280536] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280540] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.986 [2024-09-27 15:50:43.280554] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.986 [2024-09-27 15:50:43.280558] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac540) on tqpair=0x5410e0 00:32:02.986 [2024-09-27 15:50:43.280575] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.986 [2024-09-27 15:50:43.280580] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.986 [2024-09-27 15:50:43.280584] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280588] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac3c0) on tqpair=0x5410e0 00:32:02.986 [2024-09-27 15:50:43.280598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.986 [2024-09-27 15:50:43.280604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.986 [2024-09-27 15:50:43.280608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280612] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac6c0) on tqpair=0x5410e0 00:32:02.986 [2024-09-27 15:50:43.280618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.986 [2024-09-27 15:50:43.280626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.986 [2024-09-27 15:50:43.280630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.986 [2024-09-27 15:50:43.280634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac840) on tqpair=0x5410e0 00:32:02.986 
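What follows is the human-readable controller report that SPDK's identify example assembles from the IDENTIFY data and log pages fetched above. A sketch of the kind of invocation that produces it, reconstructed rather than copied from this job's scripts (the -r transport ID string matches this run's target, and -L all is what enables the *DEBUG* lines interleaved with the report):

    # Run SPDK's identify example against the NVMe-oF/TCP target (assumed invocation).
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all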
=====================================================
00:32:02.986 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:02.986 =====================================================
00:32:02.986 Controller Capabilities/Features
00:32:02.986 ================================
00:32:02.986 Vendor ID: 8086
00:32:02.986 Subsystem Vendor ID: 8086
00:32:02.986 Serial Number: SPDK00000000000001
00:32:02.986 Model Number: SPDK bdev Controller
00:32:02.986 Firmware Version: 25.01
00:32:02.986 Recommended Arb Burst: 6
00:32:02.986 IEEE OUI Identifier: e4 d2 5c
00:32:02.986 Multi-path I/O
00:32:02.986 May have multiple subsystem ports: Yes
00:32:02.986 May have multiple controllers: Yes
00:32:02.986 Associated with SR-IOV VF: No
00:32:02.986 Max Data Transfer Size: 131072
00:32:02.986 Max Number of Namespaces: 32
00:32:02.986 Max Number of I/O Queues: 127
00:32:02.986 NVMe Specification Version (VS): 1.3
00:32:02.986 NVMe Specification Version (Identify): 1.3
00:32:02.986 Maximum Queue Entries: 128
00:32:02.986 Contiguous Queues Required: Yes
00:32:02.986 Arbitration Mechanisms Supported
00:32:02.986 Weighted Round Robin: Not Supported
00:32:02.986 Vendor Specific: Not Supported
00:32:02.986 Reset Timeout: 15000 ms
00:32:02.986 Doorbell Stride: 4 bytes
00:32:02.986 NVM Subsystem Reset: Not Supported
00:32:02.986 Command Sets Supported
00:32:02.986 NVM Command Set: Supported
00:32:02.986 Boot Partition: Not Supported
00:32:02.986 Memory Page Size Minimum: 4096 bytes
00:32:02.986 Memory Page Size Maximum: 4096 bytes
00:32:02.986 Persistent Memory Region: Not Supported
00:32:02.986 Optional Asynchronous Events Supported
00:32:02.986 Namespace Attribute Notices: Supported
00:32:02.986 Firmware Activation Notices: Not Supported
00:32:02.986 ANA Change Notices: Not Supported
00:32:02.986 PLE Aggregate Log Change Notices: Not Supported
00:32:02.986 LBA Status Info Alert Notices: Not Supported
00:32:02.986 EGE Aggregate Log Change Notices: Not Supported
00:32:02.986 Normal NVM Subsystem Shutdown event: Not Supported
00:32:02.986 Zone Descriptor Change Notices: Not Supported
00:32:02.986 Discovery Log Change Notices: Not Supported
00:32:02.986 Controller Attributes
00:32:02.986 128-bit Host Identifier: Supported
00:32:02.986 Non-Operational Permissive Mode: Not Supported
00:32:02.986 NVM Sets: Not Supported
00:32:02.986 Read Recovery Levels: Not Supported
00:32:02.986 Endurance Groups: Not Supported
00:32:02.986 Predictable Latency Mode: Not Supported
00:32:02.986 Traffic Based Keep ALive: Not Supported
00:32:02.986 Namespace Granularity: Not Supported
00:32:02.986 SQ Associations: Not Supported
00:32:02.986 UUID List: Not Supported
00:32:02.986 Multi-Domain Subsystem: Not Supported
00:32:02.986 Fixed Capacity Management: Not Supported
00:32:02.986 Variable Capacity Management: Not Supported
00:32:02.986 Delete Endurance Group: Not Supported
00:32:02.986 Delete NVM Set: Not Supported
00:32:02.986 Extended LBA Formats Supported: Not Supported
00:32:02.986 Flexible Data Placement Supported: Not Supported
00:32:02.986
00:32:02.986 Controller Memory Buffer Support
00:32:02.986 ================================
00:32:02.986 Supported: No
00:32:02.986
00:32:02.986 Persistent Memory Region Support
00:32:02.986 ================================
00:32:02.986 Supported: No
00:32:02.986
00:32:02.986 Admin Command Set Attributes
00:32:02.986 ============================
00:32:02.986 Security Send/Receive: Not Supported
00:32:02.986 Format NVM: Not Supported
00:32:02.986 Firmware Activate/Download: Not Supported
00:32:02.986 Namespace Management: Not Supported
00:32:02.986 Device Self-Test: Not Supported
00:32:02.986 Directives: Not Supported
00:32:02.986 NVMe-MI: Not Supported
00:32:02.986 Virtualization Management: Not Supported
00:32:02.986 Doorbell Buffer Config: Not Supported
00:32:02.986 Get LBA Status Capability: Not Supported
00:32:02.986 Command & Feature Lockdown Capability: Not Supported
00:32:02.986 Abort Command Limit: 4
00:32:02.986 Async Event Request Limit: 4
00:32:02.986 Number of Firmware Slots: N/A
00:32:02.986 Firmware Slot 1 Read-Only: N/A
00:32:02.986 Firmware Activation Without Reset: N/A
00:32:02.986 Multiple Update Detection Support: N/A
00:32:02.986 Firmware Update Granularity: No Information Provided
00:32:02.986 Per-Namespace SMART Log: No
00:32:02.986 Asymmetric Namespace Access Log Page: Not Supported
00:32:02.986 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:32:02.986 Command Effects Log Page: Supported
00:32:02.986 Get Log Page Extended Data: Supported
00:32:02.986 Telemetry Log Pages: Not Supported
00:32:02.987 Persistent Event Log Pages: Not Supported
00:32:02.987 Supported Log Pages Log Page: May Support
00:32:02.987 Commands Supported & Effects Log Page: Not Supported
00:32:02.987 Feature Identifiers & Effects Log Page:May Support
00:32:02.987 NVMe-MI Commands & Effects Log Page: May Support
00:32:02.987 Data Area 4 for Telemetry Log: Not Supported
00:32:02.987 Error Log Page Entries Supported: 128
00:32:02.987 Keep Alive: Supported
00:32:02.987 Keep Alive Granularity: 10000 ms
00:32:02.987
00:32:02.987 NVM Command Set Attributes
00:32:02.987 ==========================
00:32:02.987 Submission Queue Entry Size
00:32:02.987 Max: 64
00:32:02.987 Min: 64
00:32:02.987 Completion Queue Entry Size
00:32:02.987 Max: 16
00:32:02.987 Min: 16
00:32:02.987 Number of Namespaces: 32
00:32:02.987 Compare Command: Supported
00:32:02.987 Write Uncorrectable Command: Not Supported
00:32:02.987 Dataset Management Command: Supported
00:32:02.987 Write Zeroes Command: Supported
00:32:02.987 Set Features Save Field: Not Supported
00:32:02.987 Reservations: Supported
00:32:02.987 Timestamp: Not Supported
00:32:02.987 Copy: Supported
00:32:02.987 Volatile Write Cache: Present
00:32:02.987 Atomic Write Unit (Normal): 1
00:32:02.987 Atomic Write Unit (PFail): 1
00:32:02.987 Atomic Compare & Write Unit: 1
00:32:02.987 Fused Compare & Write: Supported
00:32:02.987 Scatter-Gather List
00:32:02.987 SGL Command Set: Supported
00:32:02.987 SGL Keyed: Supported
00:32:02.987 SGL Bit Bucket Descriptor: Not Supported
00:32:02.987 SGL Metadata Pointer: Not Supported
00:32:02.987 Oversized SGL: Not Supported
00:32:02.987 SGL Metadata Address: Not Supported
00:32:02.987 SGL Offset: Supported
00:32:02.987 Transport SGL Data Block: Not Supported
00:32:02.987 Replay Protected Memory Block: Not Supported
00:32:02.987
00:32:02.987 Firmware Slot Information
00:32:02.987 =========================
00:32:02.987 Active slot: 1
00:32:02.987 Slot 1 Firmware Revision: 25.01
00:32:02.987
00:32:02.987
00:32:02.987 Commands Supported and Effects
00:32:02.987 ==============================
00:32:02.987 Admin Commands
00:32:02.987 --------------
00:32:02.987 Get Log Page (02h): Supported
00:32:02.987 Identify (06h): Supported
00:32:02.987 Abort (08h): Supported
00:32:02.987 Set Features (09h): Supported
00:32:02.987 Get Features (0Ah): Supported
00:32:02.987 Asynchronous Event Request (0Ch): Supported
00:32:02.987 Keep Alive (18h): Supported
00:32:02.987 I/O Commands
00:32:02.987 ------------
00:32:02.987 Flush (00h): Supported LBA-Change
00:32:02.987 Write (01h): Supported LBA-Change
00:32:02.987 Read (02h): Supported
00:32:02.987 Compare (05h): Supported
00:32:02.987 Write Zeroes (08h): Supported LBA-Change
00:32:02.987 Dataset Management (09h): Supported LBA-Change
00:32:02.987 Copy (19h): Supported LBA-Change
00:32:02.987
00:32:02.987 Error Log
00:32:02.987 =========
00:32:02.987
00:32:02.987 Arbitration
00:32:02.987 ===========
00:32:02.987 Arbitration Burst: 1
00:32:02.987
00:32:02.987 Power Management
00:32:02.987 ================
00:32:02.987 Number of Power States: 1
00:32:02.987 Current Power State: Power State #0
00:32:02.987 Power State #0:
00:32:02.987 Max Power: 0.00 W
00:32:02.987 Non-Operational State: Operational
00:32:02.987 Entry Latency: Not Reported
00:32:02.987 Exit Latency: Not Reported
00:32:02.987 Relative Read Throughput: 0
00:32:02.987 Relative Read Latency: 0
00:32:02.987 Relative Write Throughput: 0
00:32:02.987 Relative Write Latency: 0
00:32:02.987 Idle Power: Not Reported
00:32:02.987 Active Power: Not Reported
00:32:02.987 Non-Operational Permissive Mode: Not Supported
00:32:02.987
00:32:02.987 Health Information
00:32:02.987 ==================
00:32:02.987 Critical Warnings:
00:32:02.987 Available Spare Space: OK
00:32:02.987 Temperature: OK
00:32:02.987 Device Reliability: OK
00:32:02.987 Read Only: No
00:32:02.987 Volatile Memory Backup: OK
00:32:02.987 Current Temperature: 0 Kelvin (-273 Celsius)
00:32:02.987 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:32:02.987 Available Spare: 0%
00:32:02.987 Available Spare Threshold: 0%
00:32:02.987 Life Percentage Used:[2024-09-27 15:50:43.280740] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:02.987 [2024-09-27 15:50:43.280745] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5410e0)
00:32:02.987 [2024-09-27 15:50:43.280752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.987 [2024-09-27 15:50:43.280764] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac840, cid 7, qid 0
00:32:02.987 [2024-09-27 15:50:43.280987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:02.987 [2024-09-27 15:50:43.280995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:02.987 [2024-09-27 15:50:43.280998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:02.987 [2024-09-27 15:50:43.281002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac840) on tqpair=0x5410e0
00:32:02.987 [2024-09-27 15:50:43.281037] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:32:02.987 [2024-09-27 15:50:43.281047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abdc0) on tqpair=0x5410e0
00:32:02.987 [2024-09-27 15:50:43.281054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:02.987 [2024-09-27 15:50:43.281059] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5abf40) on tqpair=0x5410e0
00:32:02.987 [2024-09-27 15:50:43.281064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:02.987 [2024-09-27 15:50:43.281069] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac0c0) on tqpair=0x5410e0
00:32:02.987
[2024-09-27 15:50:43.281074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.987 [2024-09-27 15:50:43.281079] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.987 [2024-09-27 15:50:43.281083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.987 [2024-09-27 15:50:43.281092] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281096] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.987 [2024-09-27 15:50:43.281107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.987 [2024-09-27 15:50:43.281119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.987 [2024-09-27 15:50:43.281310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.987 [2024-09-27 15:50:43.281316] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.987 [2024-09-27 15:50:43.281320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281324] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.987 [2024-09-27 15:50:43.281331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281334] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.987 [2024-09-27 15:50:43.281345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.987 [2024-09-27 15:50:43.281358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.987 [2024-09-27 15:50:43.281587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.987 [2024-09-27 15:50:43.281596] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.987 [2024-09-27 15:50:43.281599] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.987 [2024-09-27 15:50:43.281608] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:02.987 [2024-09-27 15:50:43.281613] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:02.987 [2024-09-27 15:50:43.281623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.987 [2024-09-27 15:50:43.281637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.987 [2024-09-27 15:50:43.281648] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.987 [2024-09-27 15:50:43.281844] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.987 [2024-09-27 15:50:43.281850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.987 [2024-09-27 15:50:43.281854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281858] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.987 [2024-09-27 15:50:43.281868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.987 [2024-09-27 15:50:43.281876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.987 [2024-09-27 15:50:43.281882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.987 [2024-09-27 15:50:43.281896] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.282086] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.282092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.282096] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.282110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.282124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.282134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.282336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.282342] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.282345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.282359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.282373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.282385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.282594] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 
15:50:43.282600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.282604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.282618] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.282632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.282642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.282841] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.282847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.282850] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282854] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.282864] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282868] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.282872] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.282878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.282888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.283104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.283111] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.283114] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283118] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.283128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283132] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283135] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.283142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.283152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.283371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.283378] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.283381] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 
15:50:43.283385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.283395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.283410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.283420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.283600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.283606] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.283610] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.283624] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283628] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.283638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.283648] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.283850] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.283857] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.283860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283864] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.283874] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283878] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.283881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5410e0) 00:32:02.988 [2024-09-27 15:50:43.283888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.988 [2024-09-27 15:50:43.287904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ac240, cid 3, qid 0 00:32:02.988 [2024-09-27 15:50:43.288117] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:02.988 [2024-09-27 15:50:43.288123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:02.988 [2024-09-27 15:50:43.288127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:02.988 [2024-09-27 15:50:43.288131] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ac240) on tqpair=0x5410e0 00:32:02.988 [2024-09-27 15:50:43.288138] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:32:02.988 0% 00:32:02.988 Data Units Read: 0 00:32:02.988 Data Units Written: 0 00:32:02.988 Host Read Commands: 0 00:32:02.988 Host Write Commands: 0 00:32:02.988 Controller Busy Time: 0 minutes 00:32:02.988 Power Cycles: 0 00:32:02.988 Power On Hours: 0 hours 00:32:02.988 Unsafe Shutdowns: 0 00:32:02.988 Unrecoverable Media Errors: 0 00:32:02.988 Lifetime Error Log Entries: 0 00:32:02.988 Warning Temperature Time: 0 minutes 00:32:02.988 Critical Temperature Time: 0 minutes 00:32:02.988 00:32:02.988 Number of Queues 00:32:02.988 ================ 00:32:02.988 Number of I/O Submission Queues: 127 00:32:02.988 Number of I/O Completion Queues: 127 00:32:02.988 00:32:02.988 Active Namespaces 00:32:02.988 ================= 00:32:02.988 Namespace ID:1 00:32:02.988 Error Recovery Timeout: Unlimited 00:32:02.988 Command Set Identifier: NVM (00h) 00:32:02.988 Deallocate: Supported 00:32:02.988 Deallocated/Unwritten Error: Not Supported 00:32:02.988 Deallocated Read Value: Unknown 00:32:02.988 Deallocate in Write Zeroes: Not Supported 00:32:02.988 Deallocated Guard Field: 0xFFFF 00:32:02.988 Flush: Supported 00:32:02.988 Reservation: Supported 00:32:02.988 Namespace Sharing Capabilities: Multiple Controllers 00:32:02.988 Size (in LBAs): 131072 (0GiB) 00:32:02.988 Capacity (in LBAs): 131072 (0GiB) 00:32:02.988 Utilization (in LBAs): 131072 (0GiB) 00:32:02.988 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:02.988 EUI64: ABCDEF0123456789 00:32:02.988 UUID: cf4b0225-fc7f-48a4-8ea2-db25349a09cf 00:32:02.988 Thin Provisioning: Not Supported 00:32:02.988 Per-NS Atomic Units: Yes 00:32:02.988 Atomic Boundary Size (Normal): 0 00:32:02.988 Atomic Boundary Size (PFail): 0 00:32:02.988 Atomic Boundary Offset: 0 00:32:02.988 Maximum Single Source Range Length: 65535 00:32:02.988 Maximum Copy Length: 65535 00:32:02.988 Maximum Source Range Count: 1 00:32:02.988 NGUID/EUI64 Never Reused: No 00:32:02.988 Namespace Write Protected: No 00:32:02.988 Number of LBA Formats: 1 00:32:02.988 Current LBA Format: LBA Format #00 00:32:02.988 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:02.988 00:32:02.988 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.989 rmmod nvme_tcp 00:32:02.989 rmmod nvme_fabrics 
00:32:02.989 rmmod nvme_keyring 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 532950 ']' 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 532950 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 532950 ']' 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 532950 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 532950 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 532950' 00:32:02.989 killing process with pid 532950 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 532950 00:32:02.989 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 532950 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.251 15:50:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.796 00:32:05.796 real 0m11.856s 00:32:05.796 user 0m8.800s 00:32:05.796 sys 0m6.255s 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:05.796 ************************************ 00:32:05.796 END TEST nvmf_identify 00:32:05.796 ************************************ 00:32:05.796 15:50:45 
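killprocess, traced above, guards the kill: it verifies pid 532950 is still alive (kill -0), reads the process name back with ps to make sure it is the reactor and not a stray sudo, then kills and waits for a clean exit. A reduced sketch of that helper (simplified from the trace; not the verbatim implementation):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # already gone?
        local name
        name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
        [ "$name" = sudo ] && return 1                   # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                       # reap for a clean exit
    }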
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.796 ************************************ 00:32:05.796 START TEST nvmf_perf 00:32:05.796 ************************************ 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:05.796 * Looking for test storage... 00:32:05.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.796 15:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:32:05.796 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.797 --rc genhtml_branch_coverage=1 00:32:05.797 --rc genhtml_function_coverage=1 00:32:05.797 --rc genhtml_legend=1 00:32:05.797 --rc geninfo_all_blocks=1 00:32:05.797 --rc geninfo_unexecuted_blocks=1 00:32:05.797 00:32:05.797 ' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.797 --rc genhtml_branch_coverage=1 00:32:05.797 --rc genhtml_function_coverage=1 00:32:05.797 --rc genhtml_legend=1 00:32:05.797 --rc geninfo_all_blocks=1 00:32:05.797 --rc geninfo_unexecuted_blocks=1 00:32:05.797 00:32:05.797 ' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.797 --rc genhtml_branch_coverage=1 00:32:05.797 --rc genhtml_function_coverage=1 00:32:05.797 --rc genhtml_legend=1 00:32:05.797 --rc geninfo_all_blocks=1 00:32:05.797 --rc geninfo_unexecuted_blocks=1 00:32:05.797 00:32:05.797 ' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:05.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.797 --rc genhtml_branch_coverage=1 00:32:05.797 --rc genhtml_function_coverage=1 00:32:05.797 --rc genhtml_legend=1 00:32:05.797 --rc geninfo_all_blocks=1 00:32:05.797 --rc geninfo_unexecuted_blocks=1 00:32:05.797 00:32:05.797 ' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
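The lt 1.15 2 trace above is the harness picking lcov flags: cmp_versions splits both version strings on '.', '-' and ':' and compares component by component, so lcov 1.15 sorts below 2 and the legacy --rc lcov_* spellings get exported just after. A reduced sketch of that comparison (simplified; the traced scripts/common.sh version handles more operators):

    lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo legacy-lcov-options      # true: 1 < 2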
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:05.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.797 15:50:46 
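The '[: : integer expression expected' error printed from common.sh line 33 is harmless noise: the script evaluates '[' '' -eq 1 ']' with a variable that is unset in this environment, and test's -eq needs integers on both sides. The failure mode and the usual guard, with a hypothetical variable name standing in for the unset one:

    flag=''                      # unset/empty in this run
    [ "$flag" -eq 1 ]            # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]       # defaulting to 0 keeps the test quiet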
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.797 15:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:13.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:13.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:13.941 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:13.942 Found net devices under 0000:31:00.0: cvl_0_0 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.942 15:50:53 
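Both ports match the Intel vendor:device pair 0x8086:0x159b (an ice-driven E810), so they land in the e810 array and the rdma-only branches above are skipped. The same classification can be checked outside the harness, a sketch:

    lspci -Dnn | grep '8086:159b'
    # expected here: 0000:31:00.0 and 0000:31:00.1
    ls /sys/bus/pci/devices/0000:31:00.0/net
    # -> cvl_0_0, the kernel netdev the harness just discovered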
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:13.942 Found net devices under 0000:31:00.1: cvl_0_1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.942 15:50:53 
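nvmf_tcp_init then builds a point-to-point 10.0.0.0/24 link: the target-side port moves into its own network namespace so initiator and target traffic really cross the wire instead of looping back. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up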
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:32:13.942 00:32:13.942 --- 10.0.0.2 ping statistics --- 00:32:13.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.942 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:13.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:32:13.942 00:32:13.942 --- 10.0.0.1 ping statistics --- 00:32:13.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.942 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=537539 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 537539 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 537539 ']' 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
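Before the target starts, the harness opens TCP/4420 on the initiator-side interface, tagging the rule with an SPDK_NVMF comment so the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pass can strip it again, and then pings both directions (0.689 ms and 0.316 ms above) to prove the namespaced link works. Condensed:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator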
00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.942 15:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:13.942 [2024-09-27 15:50:53.825197] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:13.942 [2024-09-27 15:50:53.825267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.942 [2024-09-27 15:50:53.914948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.942 [2024-09-27 15:50:53.962570] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.942 [2024-09-27 15:50:53.962624] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.942 [2024-09-27 15:50:53.962632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.942 [2024-09-27 15:50:53.962639] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.942 [2024-09-27 15:50:53.962646] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.942 [2024-09-27 15:50:53.962751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.942 [2024-09-27 15:50:53.962938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.942 [2024-09-27 15:50:53.963052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.942 [2024-09-27 15:50:53.963053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.203 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:14.203 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:32:14.203 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:14.203 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:14.203 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:14.464 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.464 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:14.464 15:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:15.034 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:15.034 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:15.034 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:15.034 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:15.296 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:15.296 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:15.296 15:50:55 
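nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the perf subsystem is then assembled over RPC in the lines that follow. A sketch of the start-and-wait step; the polling loop is an assumption about waitforlisten's internals (rpc_get_methods is a standard SPDK RPC):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # hedged: poll the RPC socket until the target responds
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done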
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:15.296 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:15.296 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:15.559 [2024-09-27 15:50:55.792435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.559 15:50:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:15.559 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:15.559 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:15.821 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:15.821 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:16.082 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.342 [2024-09-27 15:50:56.603563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.342 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.342 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:16.342 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:16.342 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:16.343 15:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:17.727 Initializing NVMe Controllers 00:32:17.727 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:17.727 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:17.727 Initialization complete. Launching workers. 
00:32:17.727 ======================================================== 00:32:17.727 Latency(us) 00:32:17.727 Device Information : IOPS MiB/s Average min max 00:32:17.727 PCIE (0000:65:00.0) NSID 1 from core 0: 78117.86 305.15 409.09 13.26 5411.54 00:32:17.727 ======================================================== 00:32:17.727 Total : 78117.86 305.15 409.09 13.26 5411.54 00:32:17.727 00:32:17.727 15:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:19.111 Initializing NVMe Controllers 00:32:19.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:19.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:19.111 Initialization complete. Launching workers. 00:32:19.111 ======================================================== 00:32:19.111 Latency(us) 00:32:19.111 Device Information : IOPS MiB/s Average min max 00:32:19.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.00 0.46 8540.34 100.23 45035.71 00:32:19.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13231.45 7955.03 47888.84 00:32:19.111 ======================================================== 00:32:19.111 Total : 194.00 0.76 10378.10 100.23 47888.84 00:32:19.111 00:32:19.111 15:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:20.495 Initializing NVMe Controllers 00:32:20.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:20.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:20.495 Initialization complete. Launching workers. 00:32:20.495 ======================================================== 00:32:20.495 Latency(us) 00:32:20.495 Device Information : IOPS MiB/s Average min max 00:32:20.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11833.59 46.22 2703.94 443.02 7732.53 00:32:20.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3807.22 14.87 8576.23 4938.63 49691.53 00:32:20.495 ======================================================== 00:32:20.495 Total : 15640.81 61.10 4133.35 443.02 49691.53 00:32:20.495 00:32:20.495 15:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:20.495 15:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:20.495 15:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:23.037 Initializing NVMe Controllers 00:32:23.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:23.037 Controller IO queue size 128, less than required. 00:32:23.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:23.037 Controller IO queue size 128, less than required. 00:32:23.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:23.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:23.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:23.037 Initialization complete. Launching workers. 00:32:23.037 ======================================================== 00:32:23.037 Latency(us) 00:32:23.037 Device Information : IOPS MiB/s Average min max 00:32:23.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1902.00 475.50 68913.38 34228.87 119778.93 00:32:23.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.50 149.37 223480.79 57081.10 342652.08 00:32:23.037 ======================================================== 00:32:23.037 Total : 2499.50 624.87 105862.38 34228.87 342652.08 00:32:23.037 00:32:23.037 15:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:23.297 No valid NVMe controllers or AIO or URING devices found 00:32:23.297 Initializing NVMe Controllers 00:32:23.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:23.297 Controller IO queue size 128, less than required. 00:32:23.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:23.297 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:23.297 Controller IO queue size 128, less than required. 00:32:23.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:23.297 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:23.297 WARNING: Some requested NVMe devices were skipped 00:32:23.297 15:51:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:25.849 Initializing NVMe Controllers 00:32:25.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:25.849 Controller IO queue size 128, less than required. 00:32:25.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:25.849 Controller IO queue size 128, less than required. 00:32:25.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:25.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:25.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:25.849 Initialization complete. Launching workers. 
00:32:25.849 00:32:25.849 ==================== 00:32:25.849 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:25.849 TCP transport: 00:32:25.849 polls: 37008 00:32:25.849 idle_polls: 21187 00:32:25.849 sock_completions: 15821 00:32:25.849 nvme_completions: 7321 00:32:25.849 submitted_requests: 10960 00:32:25.849 queued_requests: 1 00:32:25.849 00:32:25.849 ==================== 00:32:25.849 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:25.849 TCP transport: 00:32:25.849 polls: 37081 00:32:25.849 idle_polls: 21093 00:32:25.849 sock_completions: 15988 00:32:25.849 nvme_completions: 7433 00:32:25.849 submitted_requests: 11106 00:32:25.849 queued_requests: 1 00:32:25.849 ======================================================== 00:32:25.849 Latency(us) 00:32:25.849 Device Information : IOPS MiB/s Average min max 00:32:25.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1829.99 457.50 71582.41 40260.09 126337.07 00:32:25.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1857.99 464.50 69650.83 30206.95 116282.54 00:32:25.849 ======================================================== 00:32:25.849 Total : 3687.98 921.99 70609.29 30206.95 126337.07 00:32:25.849 00:32:25.849 15:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:25.849 15:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.110 15:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:26.110 15:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:26.110 15:51:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=21a04a50-522c-47d1-a861-df67fa031637 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 21a04a50-522c-47d1-a861-df67fa031637 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=21a04a50-522c-47d1-a861-df67fa031637 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:27.055 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:27.316 { 00:32:27.316 "uuid": "21a04a50-522c-47d1-a861-df67fa031637", 00:32:27.316 "name": "lvs_0", 00:32:27.316 "base_bdev": "Nvme0n1", 00:32:27.316 "total_data_clusters": 457407, 00:32:27.316 "free_clusters": 457407, 00:32:27.316 "block_size": 512, 00:32:27.316 "cluster_size": 4194304 00:32:27.316 } 00:32:27.316 ]' 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="21a04a50-522c-47d1-a861-df67fa031637") .free_clusters' 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:32:27.316 15:51:07 
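get_lvs_free_mb, continuing on the next lines of the trace, turns the lvstore's free cluster count into megabytes: 457407 free clusters of 4 MiB is 1829628 MB, which perf.sh then caps at 20480 MB for the lbd_0 volume. The arithmetic:

    fc=457407 cs=4194304                        # free_clusters, cluster_size (bytes)
    free_mb=$(( fc * cs / 1024 / 1024 ))        # 1829628
    (( free_mb > 20480 )) && free_mb=20480      # cap applied before lbd_0 is created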
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="21a04a50-522c-47d1-a861-df67fa031637") .cluster_size' 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:32:27.316 1829628 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:27.316 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21a04a50-522c-47d1-a861-df67fa031637 lbd_0 20480 00:32:27.576 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=0ade6f65-0f5c-4889-baa7-ea6353b713ca 00:32:27.576 15:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0ade6f65-0f5c-4889-baa7-ea6353b713ca lvs_n_0 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=379e6358-8b31-4bd3-a49a-3c78a6df28ab 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 379e6358-8b31-4bd3-a49a-3c78a6df28ab 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=379e6358-8b31-4bd3-a49a-3c78a6df28ab 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:29.492 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:29.493 { 00:32:29.493 "uuid": "21a04a50-522c-47d1-a861-df67fa031637", 00:32:29.493 "name": "lvs_0", 00:32:29.493 "base_bdev": "Nvme0n1", 00:32:29.493 "total_data_clusters": 457407, 00:32:29.493 "free_clusters": 452287, 00:32:29.493 "block_size": 512, 00:32:29.493 "cluster_size": 4194304 00:32:29.493 }, 00:32:29.493 { 00:32:29.493 "uuid": "379e6358-8b31-4bd3-a49a-3c78a6df28ab", 00:32:29.493 "name": "lvs_n_0", 00:32:29.493 "base_bdev": "0ade6f65-0f5c-4889-baa7-ea6353b713ca", 00:32:29.493 "total_data_clusters": 5114, 00:32:29.493 "free_clusters": 5114, 00:32:29.493 "block_size": 512, 00:32:29.493 "cluster_size": 4194304 00:32:29.493 } 00:32:29.493 ]' 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="379e6358-8b31-4bd3-a49a-3c78a6df28ab") .free_clusters' 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="379e6358-8b31-4bd3-a49a-3c78a6df28ab") .cluster_size' 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:32:29.493 20456 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 379e6358-8b31-4bd3-a49a-3c78a6df28ab lbd_nest_0 20456 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=951963c5-5f79-41ba-b485-da86938d0be7 00:32:29.493 15:51:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.753 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:29.753 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 951963c5-5f79-41ba-b485-da86938d0be7 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:30.018 15:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.249 Initializing NVMe Controllers 00:32:42.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.249 Initialization complete. Launching workers. 00:32:42.249 ======================================================== 00:32:42.249 Latency(us) 00:32:42.249 Device Information : IOPS MiB/s Average min max 00:32:42.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.30 0.02 22623.87 98.42 45868.13 00:32:42.249 ======================================================== 00:32:42.250 Total : 44.30 0.02 22623.87 98.42 45868.13 00:32:42.250 00:32:42.250 15:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:42.250 15:51:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.253 Initializing NVMe Controllers 00:32:52.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.253 Initialization complete. Launching workers. 
00:32:52.253 ======================================================== 00:32:52.253 Latency(us) 00:32:52.253 Device Information : IOPS MiB/s Average min max 00:32:52.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.19 7.27 17199.85 7840.97 50881.29 00:32:52.253 ======================================================== 00:32:52.253 Total : 58.19 7.27 17199.85 7840.97 50881.29 00:32:52.253 00:32:52.253 15:51:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:52.253 15:51:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:52.253 15:51:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:02.277 Initializing NVMe Controllers 00:33:02.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:02.277 Initialization complete. Launching workers. 00:33:02.277 ======================================================== 00:33:02.277 Latency(us) 00:33:02.277 Device Information : IOPS MiB/s Average min max 00:33:02.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9464.00 4.62 3381.13 256.49 9047.43 00:33:02.277 ======================================================== 00:33:02.277 Total : 9464.00 4.62 3381.13 256.49 9047.43 00:33:02.277 00:33:02.277 15:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:02.277 15:51:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:12.283 Initializing NVMe Controllers 00:33:12.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:12.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:12.283 Initialization complete. Launching workers. 00:33:12.283 ======================================================== 00:33:12.283 Latency(us) 00:33:12.283 Device Information : IOPS MiB/s Average min max 00:33:12.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4836.50 604.56 6619.94 592.70 16127.60 00:33:12.283 ======================================================== 00:33:12.283 Total : 4836.50 604.56 6619.94 592.70 16127.60 00:33:12.283 00:33:12.283 15:51:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:12.283 15:51:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:12.283 15:51:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:22.288 Initializing NVMe Controllers 00:33:22.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:22.288 Controller IO queue size 128, less than required. 00:33:22.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:33:22.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:22.288 Initialization complete. Launching workers. 00:33:22.288 ======================================================== 00:33:22.288 Latency(us) 00:33:22.288 Device Information : IOPS MiB/s Average min max 00:33:22.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15896.50 7.76 8055.60 1683.26 21502.68 00:33:22.288 ======================================================== 00:33:22.288 Total : 15896.50 7.76 8055.60 1683.26 21502.68 00:33:22.288 00:33:22.288 15:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:22.288 15:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:32.286 Initializing NVMe Controllers 00:33:32.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:32.286 Controller IO queue size 128, less than required. 00:33:32.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:32.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:32.286 Initialization complete. Launching workers. 00:33:32.286 ======================================================== 00:33:32.286 Latency(us) 00:33:32.286 Device Information : IOPS MiB/s Average min max 00:33:32.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.93 149.49 107838.30 22876.36 227527.72 00:33:32.286 ======================================================== 00:33:32.286 Total : 1195.93 149.49 107838.30 22876.36 227527.72 00:33:32.286 00:33:32.286 15:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.286 15:52:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 951963c5-5f79-41ba-b485-da86938d0be7 00:33:34.201 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:34.201 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ade6f65-0f5c-4889-baa7-ea6353b713ca 00:33:34.201 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.461 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.461 rmmod nvme_tcp 
00:33:34.461 rmmod nvme_fabrics 00:33:34.461 rmmod nvme_keyring 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 537539 ']' 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 537539 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 537539 ']' 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 537539 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.462 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 537539 00:33:34.722 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.722 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.722 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 537539' 00:33:34.722 killing process with pid 537539 00:33:34.722 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 537539 00:33:34.722 15:52:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 537539 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.636 15:52:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.549 15:52:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.549 00:33:38.549 real 1m33.179s 00:33:38.549 user 5m27.131s 00:33:38.549 sys 0m16.235s 00:33:38.549 15:52:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:38.549 15:52:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:38.549 ************************************ 00:33:38.549 END TEST nvmf_perf 00:33:38.549 ************************************ 00:33:38.549 15:52:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.549 15:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:38.549 15:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.549 15:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.809 ************************************ 00:33:38.809 START TEST nvmf_fio_host 00:33:38.809 ************************************ 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.809 * Looking for test storage... 00:33:38.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:38.809 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:38.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.810 --rc genhtml_branch_coverage=1 00:33:38.810 --rc genhtml_function_coverage=1 00:33:38.810 --rc genhtml_legend=1 00:33:38.810 --rc geninfo_all_blocks=1 00:33:38.810 --rc geninfo_unexecuted_blocks=1 00:33:38.810 00:33:38.810 ' 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:38.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.810 --rc genhtml_branch_coverage=1 00:33:38.810 --rc genhtml_function_coverage=1 00:33:38.810 --rc genhtml_legend=1 00:33:38.810 --rc geninfo_all_blocks=1 00:33:38.810 --rc geninfo_unexecuted_blocks=1 00:33:38.810 00:33:38.810 ' 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:38.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.810 --rc genhtml_branch_coverage=1 00:33:38.810 --rc genhtml_function_coverage=1 00:33:38.810 --rc genhtml_legend=1 00:33:38.810 --rc geninfo_all_blocks=1 00:33:38.810 --rc geninfo_unexecuted_blocks=1 00:33:38.810 00:33:38.810 ' 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:38.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.810 --rc genhtml_branch_coverage=1 00:33:38.810 --rc genhtml_function_coverage=1 00:33:38.810 --rc genhtml_legend=1 00:33:38.810 --rc geninfo_all_blocks=1 00:33:38.810 --rc geninfo_unexecuted_blocks=1 00:33:38.810 00:33:38.810 ' 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.810 15:52:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.810 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:39.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.071 
15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.071 15:52:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.217 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:47.218 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:47.218 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:47.218 Found net devices under 0000:31:00.0: cvl_0_0 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:47.218 Found net devices under 0000:31:00.1: cvl_0_1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:33:47.218 00:33:47.218 --- 10.0.0.2 ping statistics --- 00:33:47.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.218 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:33:47.218 00:33:47.218 --- 10.0.0.1 ping statistics --- 00:33:47.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.218 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:47.218 15:52:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=558073 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 558073 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 558073 ']' 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.218 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:47.219 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.219 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:47.219 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.219 [2024-09-27 15:52:27.062219] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:33:47.219 [2024-09-27 15:52:27.062291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.219 [2024-09-27 15:52:27.151705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.219 [2024-09-27 15:52:27.199235] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.219 [2024-09-27 15:52:27.199287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.219 [2024-09-27 15:52:27.199295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.219 [2024-09-27 15:52:27.199302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.219 [2024-09-27 15:52:27.199308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.219 [2024-09-27 15:52:27.199447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.219 [2024-09-27 15:52:27.199603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.219 [2024-09-27 15:52:27.199756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.219 [2024-09-27 15:52:27.199758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.480 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.480 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:33:47.480 15:52:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.742 [2024-09-27 15:52:28.048611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.742 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:47.742 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.742 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.742 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:48.004 Malloc1 00:33:48.004 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:48.266 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:48.266 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.527 [2024-09-27 15:52:28.913438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.527 15:52:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:48.789 15:52:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:49.050 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:49.050 fio-3.35 00:33:49.050 Starting 1 thread 00:33:51.596 00:33:51.596 test: (groupid=0, jobs=1): 
err= 0: pid=558625: Fri Sep 27 15:52:31 2024 00:33:51.596 read: IOPS=14.0k, BW=54.5MiB/s (57.2MB/s)(109MiB/2004msec) 00:33:51.596 slat (usec): min=2, max=350, avg= 2.15, stdev= 2.77 00:33:51.596 clat (usec): min=3120, max=8721, avg=5038.80, stdev=368.87 00:33:51.596 lat (usec): min=3122, max=8723, avg=5040.95, stdev=369.03 00:33:51.596 clat percentiles (usec): 00:33:51.596 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:33:51.596 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5145], 00:33:51.596 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5604], 00:33:51.596 | 99.00th=[ 5932], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 7898], 00:33:51.596 | 99.99th=[ 8586] 00:33:51.596 bw ( KiB/s): min=54328, max=56424, per=99.91%, avg=55804.00, stdev=995.12, samples=4 00:33:51.596 iops : min=13582, max=14106, avg=13951.00, stdev=248.78, samples=4 00:33:51.597 write: IOPS=14.0k, BW=54.6MiB/s (57.2MB/s)(109MiB/2004msec); 0 zone resets 00:33:51.597 slat (usec): min=2, max=277, avg= 2.22, stdev= 1.81 00:33:51.597 clat (usec): min=2583, max=7906, avg=4067.03, stdev=303.84 00:33:51.597 lat (usec): min=2585, max=7908, avg=4069.25, stdev=304.03 00:33:51.597 clat percentiles (usec): 00:33:51.597 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:33:51.597 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:33:51.597 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:33:51.597 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 6194], 99.95th=[ 6849], 00:33:51.597 | 99.99th=[ 7832] 00:33:51.597 bw ( KiB/s): min=54712, max=56424, per=100.00%, avg=55908.00, stdev=803.79, samples=4 00:33:51.597 iops : min=13678, max=14106, avg=13977.00, stdev=200.95, samples=4 00:33:51.597 lat (msec) : 4=20.46%, 10=79.54% 00:33:51.597 cpu : usr=76.19%, sys=22.52%, ctx=30, majf=0, minf=17 00:33:51.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:51.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.597 issued rwts: total=27983,28000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.597 00:33:51.597 Run status group 0 (all jobs): 00:33:51.597 READ: bw=54.5MiB/s (57.2MB/s), 54.5MiB/s-54.5MiB/s (57.2MB/s-57.2MB/s), io=109MiB (115MB), run=2004-2004msec 00:33:51.597 WRITE: bw=54.6MiB/s (57.2MB/s), 54.6MiB/s-54.6MiB/s (57.2MB/s-57.2MB/s), io=109MiB (115MB), run=2004-2004msec 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:51.597 
15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:51.597 15:52:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.858 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:51.858 fio-3.35 00:33:51.858 Starting 1 thread 00:33:54.405 00:33:54.405 test: (groupid=0, jobs=1): err= 0: pid=559419: Fri Sep 27 15:52:34 2024 00:33:54.405 read: IOPS=9645, BW=151MiB/s (158MB/s)(302MiB/2006msec) 00:33:54.405 slat (usec): min=3, max=113, avg= 3.59, stdev= 1.56 00:33:54.405 clat (usec): min=1125, max=17057, avg=7986.24, stdev=1883.73 00:33:54.405 lat (usec): min=1129, max=17060, avg=7989.82, stdev=1883.81 00:33:54.405 clat percentiles (usec): 00:33:54.405 | 1.00th=[ 4113], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6259], 00:33:54.405 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8455], 00:33:54.405 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11207], 00:33:54.405 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13304], 99.95th=[13435], 00:33:54.405 | 99.99th=[13566] 00:33:54.405 bw ( KiB/s): min=71968, max=85504, per=50.24%, avg=77536.00, stdev=6188.56, samples=4 00:33:54.405 iops : min= 4498, max= 5344, avg=4846.00, stdev=386.79, samples=4 00:33:54.405 write: IOPS=5609, BW=87.6MiB/s (91.9MB/s)(158MiB/1801msec); 0 zone resets 00:33:54.405 slat (usec): min=39, max=327, 
avg=40.79, stdev= 6.27 00:33:54.405 clat (usec): min=2211, max=13822, avg=9061.39, stdev=1381.14 00:33:54.405 lat (usec): min=2251, max=13862, avg=9102.18, stdev=1382.16 00:33:54.405 clat percentiles (usec): 00:33:54.405 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 7963], 00:33:54.405 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:33:54.405 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10945], 95.00th=[11338], 00:33:54.405 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13435], 99.95th=[13566], 00:33:54.405 | 99.99th=[13698] 00:33:54.405 bw ( KiB/s): min=74816, max=88960, per=89.62%, avg=80432.00, stdev=6171.41, samples=4 00:33:54.405 iops : min= 4676, max= 5560, avg=5027.00, stdev=385.71, samples=4 00:33:54.405 lat (msec) : 2=0.04%, 4=0.65%, 10=81.22%, 20=18.09% 00:33:54.405 cpu : usr=86.63%, sys=12.22%, ctx=9, majf=0, minf=35 00:33:54.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:54.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.405 issued rwts: total=19349,10102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.405 00:33:54.405 Run status group 0 (all jobs): 00:33:54.405 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2006-2006msec 00:33:54.405 WRITE: bw=87.6MiB/s (91.9MB/s), 87.6MiB/s-87.6MiB/s (91.9MB/s-91.9MB/s), io=158MiB (166MB), run=1801-1801msec 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:33:54.405 15:52:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:54.978 Nvme0n1 00:33:54.978 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=4c935557-88fd-4a41-9960-9cb8b6320ead 00:33:55.553 15:52:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 4c935557-88fd-4a41-9960-9cb8b6320ead 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4c935557-88fd-4a41-9960-9cb8b6320ead 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:55.553 15:52:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:55.812 { 00:33:55.812 "uuid": "4c935557-88fd-4a41-9960-9cb8b6320ead", 00:33:55.812 "name": "lvs_0", 00:33:55.812 "base_bdev": "Nvme0n1", 00:33:55.812 "total_data_clusters": 1787, 00:33:55.812 "free_clusters": 1787, 00:33:55.812 "block_size": 512, 00:33:55.812 "cluster_size": 1073741824 00:33:55.812 } 00:33:55.812 ]' 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4c935557-88fd-4a41-9960-9cb8b6320ead") .free_clusters' 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4c935557-88fd-4a41-9960-9cb8b6320ead") .cluster_size' 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:33:55.812 1829888 00:33:55.812 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:55.812 ff137197-864d-4b7e-993d-ed5f2efe659a 00:33:56.073 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:56.073 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:56.334 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.594 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.595 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:56.595 15:52:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:56.856 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:56.856 fio-3.35 00:33:56.856 Starting 1 thread 00:33:59.404 00:33:59.404 test: (groupid=0, jobs=1): err= 0: pid=560514: Fri Sep 27 15:52:39 2024 00:33:59.404 read: IOPS=10.5k, BW=40.9MiB/s (42.8MB/s)(81.9MiB/2005msec) 00:33:59.404 slat (usec): min=2, max=110, avg= 2.20, stdev= 1.06 00:33:59.404 clat (usec): min=2445, max=11111, avg=6764.94, stdev=495.44 00:33:59.404 lat (usec): min=2463, max=11113, avg=6767.15, stdev=495.39 00:33:59.404 clat percentiles (usec): 00:33:59.404 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:33:59.404 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:33:59.404 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7504], 00:33:59.404 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 8979], 99.95th=[10159], 00:33:59.405 | 99.99th=[11076] 00:33:59.405 bw ( KiB/s): min=40870, 
max=42320, per=99.86%, avg=41781.50, stdev=661.66, samples=4 00:33:59.405 iops : min=10217, max=10580, avg=10445.25, stdev=165.64, samples=4 00:33:59.405 write: IOPS=10.5k, BW=40.9MiB/s (42.8MB/s)(81.9MiB/2005msec); 0 zone resets 00:33:59.405 slat (usec): min=2, max=326, avg= 2.28, stdev= 2.31 00:33:59.405 clat (usec): min=1261, max=10112, avg=5414.99, stdev=432.07 00:33:59.405 lat (usec): min=1268, max=10114, avg=5417.27, stdev=432.09 00:33:59.405 clat percentiles (usec): 00:33:59.405 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:33:59.405 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:33:59.405 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6063], 00:33:59.405 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 8291], 99.95th=[ 8586], 00:33:59.405 | 99.99th=[10028] 00:33:59.405 bw ( KiB/s): min=41461, max=42120, per=99.93%, avg=41807.25, stdev=270.99, samples=4 00:33:59.405 iops : min=10365, max=10530, avg=10451.75, stdev=67.85, samples=4 00:33:59.405 lat (msec) : 2=0.02%, 4=0.10%, 10=99.84%, 20=0.04% 00:33:59.405 cpu : usr=74.15%, sys=24.80%, ctx=62, majf=0, minf=20 00:33:59.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:59.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.405 issued rwts: total=20972,20970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.405 00:33:59.405 Run status group 0 (all jobs): 00:33:59.405 READ: bw=40.9MiB/s (42.8MB/s), 40.9MiB/s-40.9MiB/s (42.8MB/s-42.8MB/s), io=81.9MiB (85.9MB), run=2005-2005msec 00:33:59.405 WRITE: bw=40.9MiB/s (42.8MB/s), 40.9MiB/s-40.9MiB/s (42.8MB/s-42.8MB/s), io=81.9MiB (85.9MB), run=2005-2005msec 00:33:59.405 15:52:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:59.666 15:52:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=08c6f418-7bdc-4732-bd4e-9d9657e9c2ba 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 08c6f418-7bdc-4732-bd4e-9d9657e9c2ba 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=08c6f418-7bdc-4732-bd4e-9d9657e9c2ba 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:00.609 { 00:34:00.609 "uuid": "4c935557-88fd-4a41-9960-9cb8b6320ead", 00:34:00.609 "name": "lvs_0", 00:34:00.609 "base_bdev": "Nvme0n1", 00:34:00.609 "total_data_clusters": 1787, 00:34:00.609 "free_clusters": 0, 00:34:00.609 "block_size": 512, 00:34:00.609 "cluster_size": 
1073741824 00:34:00.609 }, 00:34:00.609 { 00:34:00.609 "uuid": "08c6f418-7bdc-4732-bd4e-9d9657e9c2ba", 00:34:00.609 "name": "lvs_n_0", 00:34:00.609 "base_bdev": "ff137197-864d-4b7e-993d-ed5f2efe659a", 00:34:00.609 "total_data_clusters": 457025, 00:34:00.609 "free_clusters": 457025, 00:34:00.609 "block_size": 512, 00:34:00.609 "cluster_size": 4194304 00:34:00.609 } 00:34:00.609 ]' 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="08c6f418-7bdc-4732-bd4e-9d9657e9c2ba") .free_clusters' 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="08c6f418-7bdc-4732-bd4e-9d9657e9c2ba") .cluster_size' 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:34:00.609 1828100 00:34:00.609 15:52:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:01.553 e159b250-5038-440e-b71e-cb05f4c21d6a 00:34:01.553 15:52:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:01.553 15:52:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:01.813 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.075 
15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:02.075 15:52:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:02.336 fio-3.35 00:34:02.336 Starting 1 thread 00:34:04.883 00:34:04.883 test: (groupid=0, jobs=1): err= 0: pid=561693: Fri Sep 27 15:52:45 2024 00:34:04.883 read: IOPS=9262, BW=36.2MiB/s (37.9MB/s)(72.6MiB/2006msec) 00:34:04.883 slat (usec): min=2, max=110, avg= 2.22, stdev= 1.15 00:34:04.883 clat (usec): min=2076, max=12715, avg=7637.78, stdev=588.75 00:34:04.883 lat (usec): min=2093, max=12717, avg=7640.00, stdev=588.69 00:34:04.883 clat percentiles (usec): 00:34:04.883 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:34:04.883 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:34:04.883 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:34:04.883 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11863], 00:34:04.883 | 99.99th=[12649] 00:34:04.883 bw ( KiB/s): min=35912, max=37672, per=99.92%, avg=37020.00, stdev=766.23, samples=4 00:34:04.883 iops : min= 8978, max= 9418, avg=9255.00, stdev=191.56, samples=4 00:34:04.883 write: IOPS=9267, BW=36.2MiB/s (38.0MB/s)(72.6MiB/2006msec); 0 zone resets 00:34:04.883 slat (nsec): min=2096, max=120147, avg=2290.01, stdev=916.54 00:34:04.883 clat (usec): min=1062, max=11200, avg=6087.94, stdev=509.70 00:34:04.883 lat (usec): min=1069, max=11202, avg=6090.23, stdev=509.68 00:34:04.883 clat percentiles (usec): 00:34:04.883 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:34:04.883 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:34:04.883 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6849], 00:34:04.883 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 9503], 
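The ldd | grep | awk pipeline traced before this fio run (and before every fio run in this file) resolves any ASan runtime the fio plugin links against, so it can be preloaded ahead of the plugin. A hedged sketch of that loop with the variable names taken from the trace; $fio_config is a stand-in for the job-file argument:

    sanitizers=('libasan' 'libclang_rt.asan')
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    for sanitizer in "${sanitizers[@]}"; do
        # third ldd column is the resolved library path; empty on non-ASan builds
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # asan_lib is empty in this run, so only the plugin itself ends up preloaded
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_config"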
99.95th=[10290], 00:34:04.883 | 99.99th=[11076] 00:34:04.883 bw ( KiB/s): min=36752, max=37312, per=99.99%, avg=37066.00, stdev=271.17, samples=4 00:34:04.883 iops : min= 9188, max= 9328, avg=9266.50, stdev=67.79, samples=4 00:34:04.883 lat (msec) : 2=0.01%, 4=0.10%, 10=99.78%, 20=0.11% 00:34:04.883 cpu : usr=72.32%, sys=26.78%, ctx=58, majf=0, minf=20 00:34:04.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:04.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:04.883 issued rwts: total=18580,18590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:04.883 00:34:04.883 Run status group 0 (all jobs): 00:34:04.883 READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.6MiB (76.1MB), run=2006-2006msec 00:34:04.883 WRITE: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.6MiB (76.1MB), run=2006-2006msec 00:34:04.883 15:52:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:04.883 15:52:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:04.883 15:52:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:06.797 15:52:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:07.059 15:52:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:07.633 15:52:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:07.633 15:52:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.182 rmmod nvme_tcp 00:34:10.182 rmmod nvme_fabrics 00:34:10.182 rmmod nvme_keyring 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:10.182 15:52:50 
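The deletes traced here, continued at @79/@80 below, unwind the stack in reverse order of creation. Condensed from the RPCs in this run (rpc.py path shortened):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop host I/O first
    sync
    scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0         # nested lvol before its lvstore
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
    scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0                       # then the base lvol and lvstore
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
    scripts/rpc.py bdev_nvme_detach_controller Nvme0                  # finally release the PCIe controller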
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 558073 ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 558073 ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 558073' 00:34:10.182 killing process with pid 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 558073 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.182 15:52:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.099 00:34:12.099 real 0m33.371s 00:34:12.099 user 2m40.952s 00:34:12.099 sys 0m10.068s 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.099 ************************************ 00:34:12.099 END TEST nvmf_fio_host 00:34:12.099 ************************************ 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.099 ************************************ 00:34:12.099 START TEST nvmf_failover 00:34:12.099 ************************************ 00:34:12.099 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:12.361 * Looking for test storage... 00:34:12.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.361 --rc genhtml_branch_coverage=1 00:34:12.361 --rc genhtml_function_coverage=1 00:34:12.361 --rc genhtml_legend=1 00:34:12.361 --rc geninfo_all_blocks=1 00:34:12.361 --rc geninfo_unexecuted_blocks=1 00:34:12.361 00:34:12.361 ' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.361 --rc genhtml_branch_coverage=1 00:34:12.361 --rc genhtml_function_coverage=1 00:34:12.361 --rc genhtml_legend=1 00:34:12.361 --rc geninfo_all_blocks=1 00:34:12.361 --rc geninfo_unexecuted_blocks=1 00:34:12.361 00:34:12.361 ' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.361 --rc genhtml_branch_coverage=1 00:34:12.361 --rc genhtml_function_coverage=1 00:34:12.361 --rc genhtml_legend=1 00:34:12.361 --rc geninfo_all_blocks=1 00:34:12.361 --rc geninfo_unexecuted_blocks=1 00:34:12.361 00:34:12.361 ' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.361 --rc genhtml_branch_coverage=1 00:34:12.361 --rc genhtml_function_coverage=1 00:34:12.361 --rc genhtml_legend=1 00:34:12.361 --rc geninfo_all_blocks=1 00:34:12.361 --rc geninfo_unexecuted_blocks=1 00:34:12.361 00:34:12.361 ' 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:12.361 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:12.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
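The "[: : integer expression expected" message above is a genuine shell diagnostic: common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable under test is unset. A hedged fix sketch; the variable actually tested at common.sh:33 is not visible in this log, so SPDK_SOME_FLAG below is a stand-in:

    # default the value so test(1) always sees an integer
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # hypothetical name; substitute the real variable
        echo "flag set"                          # placeholder body
    fi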
00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.362 15:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:20.509 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
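The discovery block that follows builds tables of supported NICs keyed by PCI vendor/device ID (e810, x722, mlx), then maps each matching bdf to its kernel net device through sysfs. A minimal sketch of the sysfs step, using the array names from the trace:

    # a PCI function's netdev name is a directory under /sys/bus/pci/devices/<bdf>/net/
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs prefix, keep e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done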
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.510 Found net devices under 0000:31:00.0: cvl_0_0 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.510 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:34:20.510 00:34:20.510 --- 10.0.0.2 ping statistics --- 00:34:20.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.510 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
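The nvmf_tcp_init steps traced above give the target port (cvl_0_0) its own network namespace at 10.0.0.2 while the initiator port (cvl_0_1) stays in the root namespace at 10.0.0.1; the two pings then verify each direction. Condensed from the ip/iptables commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                    # root namespace -> namespaced target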
00:34:20.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:34:20.510 00:34:20.510 --- 10.0.0.1 ping statistics --- 00:34:20.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.510 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=567230 00:34:20.510 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 567230 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 567230 ']' 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:20.511 15:53:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:20.511 [2024-09-27 15:53:00.473196] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:34:20.511 [2024-09-27 15:53:00.473263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.511 [2024-09-27 15:53:00.565659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:20.511 [2024-09-27 15:53:00.612536] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:20.511 [2024-09-27 15:53:00.612597] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.511 [2024-09-27 15:53:00.612606] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.511 [2024-09-27 15:53:00.612613] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.511 [2024-09-27 15:53:00.612619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.511 [2024-09-27 15:53:00.612787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.511 [2024-09-27 15:53:00.612841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.511 [2024-09-27 15:53:00.612842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:21.085 [2024-09-27 15:53:01.506354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.085 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:21.346 Malloc0 00:34:21.346 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.606 15:53:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.867 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.867 [2024-09-27 15:53:02.326317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.129 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:22.129 [2024-09-27 15:53:02.518820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.129 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:22.391 [2024-09-27 15:53:02.703454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=567765
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 567765 /var/tmp/bdevperf.sock
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 567765 ']'
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:22.391 15:53:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:23.334 15:53:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:23.334 15:53:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:34:23.334 15:53:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:23.334 NVMe0n1
00:34:23.595 15:53:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:23.856
00:34:23.856 15:53:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=567936
00:34:23.856 15:53:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:23.856 15:53:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:34:24.798 15:53:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:24.798 [2024-09-27 15:53:05.279768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d05d0 is same with the state(6) to be set
00:34:25.059 15:53:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:34:28.360 15:53:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:28.360
00:34:28.360 15:53:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
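Because the bdev_nvme_attach_controller calls above registered ports 4420, 4421 and (just now) 4422 under the one controller name NVMe0, the bdev layer treats them as alternate paths to the same namespace: each nvmf_subsystem_remove_listener yanks one path and the initiator fails over to a surviving one while bdevperf's verify workload keeps running; the tcp.c:1773 recv-state errors are emitted by the target as it tears down the qpairs behind the dropped listener. The current path set can be inspected from the initiator side (a sketch; the output shape varies with the SPDK version):

  # Ask the bdevperf app which trids are currently attached under NVMe0.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers -n NVMe0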
00:34:28.360 [2024-09-27 15:53:08.739093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d1230 is same with the state(6) to be set
00:34:28.361 15:53:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:31.661 15:53:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:31.661 [2024-09-27 15:53:11.933008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:31.661 15:53:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:32.601 15:53:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:32.862 15:53:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 567936
00:34:39.453 {
00:34:39.453 "results": [
00:34:39.453 {
00:34:39.453 "job": "NVMe0n1",
00:34:39.453 "core_mask": "0x1",
00:34:39.453 "workload": "verify",
00:34:39.453 "status": "finished",
00:34:39.453 "verify_range": {
00:34:39.453 "start": 0,
00:34:39.453 "length": 16384
00:34:39.453 },
00:34:39.453 "queue_depth": 128,
00:34:39.453 "io_size": 4096,
00:34:39.453 "runtime": 15.009113,
00:34:39.453 "iops": 12478.552196921963,
00:34:39.453 "mibps": 48.744344519226416,
00:34:39.453 "io_failed": 13341,
00:34:39.453 "io_timeout": 0,
00:34:39.453 "avg_latency_us": 9554.73648781606,
00:34:39.453 "min_latency_us": 349.8666666666667,
00:34:39.453 "max_latency_us": 12670.293333333333
00:34:39.453 }
00:34:39.453 ],
00:34:39.453 "core_count": 1
00:34:39.453 }
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 567765
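The JSON block above is bdevperf's summary for the 15-second verify run: roughly 12.5k IOPS, with io_failed=13341 lining up with the I/O aborted during the listener removals rather than with verification errors. Saved to a file, the headline numbers can be pulled out with jq (a sketch; bdevperf_results.json is a hypothetical file name):

  # Reduce a saved bdevperf result document to its headline metrics.
  jq '.results[0] | {job, iops, mibps, io_failed, avg_latency_us}' bdevperf_results.json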
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 567765 ']'
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 567765
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567765
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567765'
killing process with pid 567765
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 567765
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 567765
00:34:39.453 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:39.453 [2024-09-27 15:53:02.791544] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:34:39.453 [2024-09-27 15:53:02.791620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567765 ]
00:34:39.453 [2024-09-27 15:53:02.876912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:39.453 [2024-09-27 15:53:02.923515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:34:39.453 Running I/O for 15 seconds...
00:34:39.453 11060.00 IOPS, 43.20 MiB/s
[2024-09-27 15:53:05.281066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-09-27 15:53:05.281102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[three further ASYNC EVENT REQUEST commands (qid:0 cid:1, cid:2, cid:3) are aborted with the same ABORTED - SQ DELETION (00/08) completion]
[2024-09-27 15:53:05.281163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c8d0 is same with the state(6) to be set
[2024-09-27 15:53:05.281228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-09-27 15:53:05.281238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the remaining outstanding qid:1 commands (READ lba:94544-94800 and WRITE lba:94808-95456, len:8 each) print the same command/completion pairs, every one completing with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
[2024-09-27 15:53:05.283199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:39.457 [2024-09-27 15:53:05.283207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:34:39.457 [2024-09-27 15:53:05.283216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.457 [2024-09-27 15:53:05.283374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283393] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:39.457 [2024-09-27 15:53:05.283400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:39.457 [2024-09-27 15:53:05.283407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 PRP1 0x0 PRP2 0x0 00:34:39.457 [2024-09-27 15:53:05.283415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:05.283451] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203ce60 was disconnected and freed. reset controller. 00:34:39.457 [2024-09-27 15:53:05.283461] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:39.457 [2024-09-27 15:53:05.283469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:39.457 [2024-09-27 15:53:05.287269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:39.457 [2024-09-27 15:53:05.287294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201c8d0 (9): Bad file descriptor 00:34:39.457 [2024-09-27 15:53:05.321387] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:39.457 10920.50 IOPS, 42.66 MiB/s 10997.67 IOPS, 42.96 MiB/s 11211.75 IOPS, 43.80 MiB/s [2024-09-27 15:53:08.739934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.739964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:08.739975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.739985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:08.739992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:08.740005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.740010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:08.740017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.740022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.457 [2024-09-27 15:53:08.740029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:39.457 [2024-09-27 15:53:08.740035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
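The ABORTED - SQ DELETION (00/08) status repeated above is NVMe generic status (SCT 0x0) with status code 0x08: each command failed only because its submission queue was deleted while the disconnected qpair was torn down, not because of a media or transport error on the command itself. A minimal sketch, assuming only the public SPDK NVMe API (the helper name is ours, not from the test), of how a driver consumer could classify these completions as retryable after failover:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Sketch: returns true when a completion carries ABORTED - SQ DELETION
 * (SCT 0x0 / SC 0x08), i.e. the command was aborted only because its
 * submission queue was deleted during a reset/failover and can safely
 * be resubmitted on the new path. */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return spdk_nvme_cpl_is_error(cpl) &&
	       cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}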
00:34:39.457 [2024-09-27 15:53:08.739934-741448] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:28208-28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:28720-29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (identical command/completion pairs, interleaved; only cid, lba, and opcode vary)
00:34:39.460 [2024-09-27 15:53:08.741466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:39.460 [2024-09-27 15:53:08.741472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:39.460 [2024-09-27 15:53:08.741477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28712 len:8 PRP1 0x0 PRP2 0x0
00:34:39.460 [2024-09-27 15:53:08.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.460 [2024-09-27 15:53:08.741514] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203f0c0 was disconnected and freed. reset controller.
00:34:39.460 [2024-09-27 15:53:08.741521] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:34:39.461 [2024-09-27 15:53:08.741536-741574] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3/2/1/0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.461 [2024-09-27 15:53:08.741579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:39.461 [2024-09-27 15:53:08.744147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:39.461 [2024-09-27 15:53:08.744166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201c8d0 (9): Bad file descriptor
00:34:39.461 [2024-09-27 15:53:08.933220] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
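Both failover cycles above follow the same bdev_nvme path: bdev_nvme_disconnected_qpair_cb frees the dead qpair, nvme_qpair_abort_queued_reqs completes every queued request with ABORTED - SQ DELETION, bdev_nvme_failover_trid switches to the next configured TCP listener (10.0.0.2:4420 to 4421 at 15:53:05, then 4421 to 4422 here), and _bdev_nvme_reset_ctrlr_complete reports the reconnect. On this second cycle the four outstanding admin ASYNC EVENT REQUESTs (qid:0, cid 3..0) are aborted along with the I/O queue. The first reconnect took roughly 34 ms (15:53:05.287269 to .321387) and this one roughly 189 ms (15:53:08.744147 to .933220).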
00:34:39.461 11083.80 IOPS, 43.30 MiB/s
11398.17 IOPS, 44.52 MiB/s
11607.71 IOPS, 45.34 MiB/s
11769.88 IOPS, 45.98 MiB/s
[2024-09-27 15:53:13.126356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-09-27 15:53:13.126394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-09-27 15:53:13.126408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-09-27 15:53:13.126414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~120 further print_command/print_completion pairs: the remaining queued WRITEs (lba:21352-21400) and READs (lba:20384-21320), cids 1-126, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:34:39.464 [2024-09-27 15:53:13.127918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203fae0 is same with the state(6) to be set
00:34:39.464 [2024-09-27 15:53:13.127925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:39.464 [2024-09-27 15:53:13.127929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:39.464 [2024-09-27 15:53:13.127933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21328 len:8 PRP1 0x0 PRP2 0x0
00:34:39.464 [2024-09-27 15:53:13.127938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.464 [2024-09-27 15:53:13.127968] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203fae0 was disconnected and freed. reset controller.
00:34:39.464 [2024-09-27 15:53:13.127975] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:34:39.464 [2024-09-27 15:53:13.127992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:39.464 [2024-09-27 15:53:13.127999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.464 [2024-09-27 15:53:13.128005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:39.464 [2024-09-27 15:53:13.128010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.464 [2024-09-27 15:53:13.128016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:39.464 [2024-09-27 15:53:13.128021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.464 [2024-09-27 15:53:13.128026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:39.464 [2024-09-27 15:53:13.128031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:39.464 [2024-09-27 15:53:13.128036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:39.464 [2024-09-27 15:53:13.130531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:39.464 [2024-09-27 15:53:13.130553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201c8d0 (9): Bad file descriptor
00:34:39.464 [2024-09-27 15:53:13.165420] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
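That was the last of the three resets; together they walk the controller around every listener registered for the subsystem. The hops can be pulled out of the same capture (a sketch, under the same try.txt assumption as above):

    grep 'Start failover from' try.txt
    # expected for this run:
    #   Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
    #   Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
    #   Start failover from 10.0.0.2:4422 to 10.0.0.2:4420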
00:34:39.464 11850.89 IOPS, 46.29 MiB/s
11998.30 IOPS, 46.87 MiB/s
12131.18 IOPS, 47.39 MiB/s
12238.83 IOPS, 47.81 MiB/s
12333.62 IOPS, 48.18 MiB/s
12411.21 IOPS, 48.48 MiB/s
12478.07 IOPS, 48.74 MiB/s
00:34:39.464 Latency(us)
00:34:39.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:39.464 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:39.464 Verification LBA range: start 0x0 length 0x4000
00:34:39.464 NVMe0n1 : 15.01 12478.55 48.74 888.86 0.00 9554.74 349.87 12670.29
00:34:39.464 ===================================================================================================================
00:34:39.464 Total : 12478.55 48.74 888.86 0.00 9554.74 349.87 12670.29
00:34:39.464 Received shutdown signal, test time was about 15.000000 seconds
00:34:39.464
00:34:39.465 Latency(us)
00:34:39.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:39.465 ===================================================================================================================
00:34:39.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=570936
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 570936 /var/tmp/bdevperf.sock
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 570936 ']'
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
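The pass/fail decision at failover.sh@65-67 above is just a line count: exactly three 'Resetting controller successful' messages, one per hop. The MiB/s column is derived from IOPS at the 4 KiB I/O size: 12478.55 x 4096 / 2^20 is approximately 48.74 MiB/s, matching the table. A standalone sketch of the same check, assuming the captured log is in try.txt:

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }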
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:39.465 15:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:40.036 15:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:40.037 15:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:34:40.037 15:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-09-27 15:53:20.472560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:40.037 15:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-09-27 15:53:20.648980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:34:40.298 15:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:40.868 NVMe0n1
00:34:40.868 15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:40.868
00:34:41.129 15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:41.129
00:34:41.390 15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:34:41.390 15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:41.650 15:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:34:44.950 15:53:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:53:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:34:44.950 15:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=571957
00:34:44.950 15:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
15:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 571957
00:34:45.892 {
00:34:45.892 "results": [
00:34:45.892 {
00:34:45.892 "job": "NVMe0n1",
00:34:45.892 "core_mask": "0x1",
00:34:45.892 "workload": "verify",
00:34:45.892 "status": "finished",
00:34:45.892 "verify_range": {
00:34:45.892 "start": 0,
00:34:45.892 "length": 16384
00:34:45.892 },
00:34:45.892 "queue_depth": 128,
00:34:45.892 "io_size": 4096,
00:34:45.892 "runtime": 1.006984,
00:34:45.892 "iops": 12881.038824847266,
00:34:45.892 "mibps": 50.316557909559634,
00:34:45.892 "io_failed": 0,
00:34:45.892 "io_timeout": 0,
00:34:45.892 "avg_latency_us": 9902.228680389586,
00:34:45.892 "min_latency_us": 2007.04,
00:34:45.892 "max_latency_us": 13271.04
00:34:45.892 }
00:34:45.892 ],
00:34:45.892 "core_count": 1
00:34:45.892 }
15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-09-27 15:53:19.519195] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
[2024-09-27 15:53:19.519256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570936 ]
[2024-09-27 15:53:19.597619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-09-27 15:53:19.624578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-09-27 15:53:21.952019] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-09-27 15:53:21.952057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-09-27 15:53:21.952066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-09-27 15:53:21.952075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-09-27 15:53:21.952080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-09-27 15:53:21.952086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-09-27 15:53:21.952092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-09-27 15:53:21.952097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-09-27 15:53:21.952103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-09-27 15:53:21.952109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-09-27 15:53:21.952131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-09-27 15:53:21.952143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf248d0 (9): Bad file descriptor
[2024-09-27 15:53:22.008353] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:45.892 Running I/O for 1 seconds...
00:34:45.892 12843.00 IOPS, 50.17 MiB/s
00:34:45.892
00:34:45.892 Latency(us)
00:34:45.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:45.892 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:45.892 Verification LBA range: start 0x0 length 0x4000
00:34:45.892 NVMe0n1 : 1.01 12881.04 50.32 0.00 0.00 9902.23 2007.04 13271.04
00:34:45.892 ===================================================================================================================
00:34:45.892 Total : 12881.04 50.32 0.00 0.00 9902.23 2007.04 13271.04
15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:46.152 15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:46.413 15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:46.413 15:53:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:46.673 15:53:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:49.974 15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:49.974 15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 570936 ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 570936'
killing process with pid 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 570936
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:50.235 15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 567230 ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 567230 ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567230'
killing process with pid 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 567230
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:53.044 15:53:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:53.044
00:34:53.044 real 0m40.382s
00:34:53.044 user 2m3.450s
00:34:53.044 sys 0m8.864s
15:53:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
15:53:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
************************************
00:34:53.044 END TEST nvmf_failover
************************************
15:53:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
15:53:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
15:53:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
15:53:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
00:34:53.044 START TEST nvmf_host_discovery
************************************
15:53:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
* Looking for test storage...
00:34:53.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.044 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:53.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.045 --rc genhtml_branch_coverage=1 00:34:53.045 --rc genhtml_function_coverage=1 00:34:53.045 --rc genhtml_legend=1 00:34:53.045 --rc geninfo_all_blocks=1 00:34:53.045 --rc geninfo_unexecuted_blocks=1 00:34:53.045 00:34:53.045 ' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:53.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.045 --rc genhtml_branch_coverage=1 00:34:53.045 --rc genhtml_function_coverage=1 00:34:53.045 --rc genhtml_legend=1 00:34:53.045 --rc geninfo_all_blocks=1 00:34:53.045 --rc geninfo_unexecuted_blocks=1 00:34:53.045 00:34:53.045 ' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:53.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.045 --rc genhtml_branch_coverage=1 00:34:53.045 --rc genhtml_function_coverage=1 00:34:53.045 --rc genhtml_legend=1 00:34:53.045 --rc geninfo_all_blocks=1 00:34:53.045 --rc geninfo_unexecuted_blocks=1 00:34:53.045 00:34:53.045 ' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:53.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.045 --rc genhtml_branch_coverage=1 00:34:53.045 --rc genhtml_function_coverage=1 00:34:53.045 --rc genhtml_legend=1 00:34:53.045 --rc geninfo_all_blocks=1 00:34:53.045 --rc geninfo_unexecuted_blocks=1 00:34:53.045 00:34:53.045 ' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:53.045 15:53:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.045 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.046 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.046 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:53.046 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:53.046 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.046 15:53:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:01.194 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:01.194 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:01.194 Found net devices under 0000:31:00.0: cvl_0_0 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:01.194 Found net devices under 0000:31:00.1: cvl_0_1 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.194 15:53:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.194 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:35:01.195 00:35:01.195 --- 10.0.0.2 ping statistics --- 00:35:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.195 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:35:01.195 00:35:01.195 --- 10.0.0.1 ping statistics --- 00:35:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.195 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=577356 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 577356 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 577356 ']' 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.195 15:53:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.195 [2024-09-27 15:53:40.928610] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:01.195 [2024-09-27 15:53:40.928679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.195 [2024-09-27 15:53:41.019771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.195 [2024-09-27 15:53:41.065856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.195 [2024-09-27 15:53:41.065918] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.195 [2024-09-27 15:53:41.065927] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.195 [2024-09-27 15:53:41.065934] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.195 [2024-09-27 15:53:41.065945] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.195 [2024-09-27 15:53:41.065969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 [2024-09-27 15:53:41.788322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 [2024-09-27 15:53:41.800592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 null0 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 null1 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=577398 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 577398 /tmp/host.sock 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 577398 ']' 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:01.458 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.458 15:53:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:01.458 [2024-09-27 15:53:41.897016] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:01.458 [2024-09-27 15:53:41.897082] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577398 ] 00:35:01.721 [2024-09-27 15:53:41.981387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.721 [2024-09-27 15:53:42.028878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:02.293 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:02.554 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:02.555 15:53:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:02.555 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 [2024-09-27 15:53:43.059795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:35:02.816 15:53:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:35:02.816 15:53:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:35:03.388 [2024-09-27 15:53:43.779051] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:03.388 [2024-09-27 15:53:43.779087] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:03.388 [2024-09-27 15:53:43.779101] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:03.648 [2024-09-27 15:53:43.907492] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:03.648 [2024-09-27 15:53:44.132878] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:03.648 [2024-09-27 15:53:44.132922] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:03.909 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
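
The @914-@920 fragments above come from a generic polling helper in common/autotest_common.sh. A minimal sketch, reconstructed from those xtrace lines (the in-tree helper may differ in detail): the condition string is re-evaluated up to ten times, one second apart, and the helper returns non-zero if it never becomes true.

# Sketch of waitforcondition, pieced together from the @914-@920 xtrace
# above; not copied verbatim from common/autotest_common.sh.
waitforcondition() {
	local cond=$1 # condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]' (@914)
	local max=10  # retry budget (@915)
	while ((max--)); do # @916
		if eval "$cond"; then # re-evaluate the condition (@917)
			return 0 # condition met (@918)
		fi
		sleep 1 # back off before the next probe (@920)
	done
	return 1 # condition never became true within the budget
}
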
00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:03.910 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:04.171 15:53:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.171 [2024-09-27 15:53:44.592192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:04.171 [2024-09-27 15:53:44.593016] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:04.171 [2024-09-27 15:53:44.593041] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.171 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 
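
The @74/@75 lines above show how the test counts asynchronous notifications: it asks the host's RPC server for everything past the last consumed notification id and takes the array length. A hedged sketch follows; the notify_id advance is inferred from the 0 -> 1 -> 2 progression in this log, not copied from host/discovery.sh, and rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py.

get_notification_count() {
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
		-i "$notify_id" | jq '. | length') # @74: count events after $notify_id
	notify_id=$((notify_id + notification_count)) # @75: inferred bookkeeping
}

is_notification_count_eq() {
	local expected_count=$1 # @79
	waitforcondition 'get_notification_count && ((notification_count == expected_count))' # @80
}
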
00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.172 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:04.432 15:53:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.432 [2024-09-27 15:53:44.722457] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:04.432 15:53:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:35:04.432 [2024-09-27 15:53:44.822329] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:04.432 [2024-09-27 15:53:44.822346] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:04.432 [2024-09-27 15:53:44.822352] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:05.373 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:05.373 15:53:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.374 [2024-09-27 15:53:45.852307] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:05.374 [2024-09-27 15:53:45.852328] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:05.374 [2024-09-27 15:53:45.854150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.374 [2024-09-27 15:53:45.854171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.374 [2024-09-27 15:53:45.854181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.374 [2024-09-27 15:53:45.854189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.374 [2024-09-27 15:53:45.854197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.374 [2024-09-27 15:53:45.854205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.374 [2024-09-27 15:53:45.854213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.374 [2024-09-27 15:53:45.854221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.374 [2024-09-27 15:53:45.854229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:05.374 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:05.636 [2024-09-27 15:53:45.864164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.874202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.874414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.874428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.874436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.874448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.874467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.874474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.874483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
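
Every state check in this test funnels through three small query helpers whose pipelines appear verbatim in the @59/@55/@63 xtrace: list controller names, list bdev names, and list the listening ports of one controller's paths. A sketch assembled from those pipelines (the originals live in host/discovery.sh):

get_subsystem_names() { # @59: e.g. "nvme0"
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
		| jq -r '.[].name' | sort | xargs
}

get_bdev_list() { # @55: e.g. "nvme0n1 nvme0n2"
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
		| jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() { # @63: e.g. "4420 4421" for controller $1
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
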
00:35:05.636 [2024-09-27 15:53:45.874495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.636 [2024-09-27 15:53:45.884265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.884559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.884571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.884579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.884590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.884607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.884614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.884621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.884632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.636 [2024-09-27 15:53:45.894319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.894531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.894543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.894550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.894561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.894572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.894578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.894585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.894596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:05.636 [2024-09-27 15:53:45.904373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.904688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.904700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.904708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.904719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.904736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.904743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.904750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.904761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:05.636 [2024-09-27 15:53:45.914429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.914741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.914753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.914760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.914771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.914788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.914795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.914802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.914813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:05.636 [2024-09-27 15:53:45.924477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.924788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.924796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.924802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.924809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.924821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.924826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.924832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.924839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.636 [2024-09-27 15:53:45.934523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:05.636 [2024-09-27 15:53:45.934767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.636 [2024-09-27 15:53:45.934776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e580 with addr=10.0.0.2, port=4420 00:35:05.636 [2024-09-27 15:53:45.934781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e580 is same with the state(6) to be set 00:35:05.636 [2024-09-27 15:53:45.934792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e580 (9): Bad file descriptor 00:35:05.636 [2024-09-27 15:53:45.934799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:05.636 [2024-09-27 15:53:45.934804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:05.636 [2024-09-27 15:53:45.934809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:05.636 [2024-09-27 15:53:45.934816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
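
The burst of connect() errno 111 (ECONNREFUSED) retries above is the expected fallout of the target-side listener changes: @118 added a second listener on port 4421, and @127 then removed the 4420 listener while the host still held a path to it. The host keeps retrying 4420 until the next discovery log page prunes the stale path and leaves only 4421 (the "4420 not found / 4421 found again" lines just below). The driving RPC pair, as issued earlier in this log:

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
	-t tcp -a 10.0.0.2 -s 4421 # @118: second path appears
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
	-t tcp -a 10.0.0.2 -s 4420 # @127: host-side ECONNREFUSED retries follow
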
00:35:05.636 [2024-09-27 15:53:45.939867] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:05.636 [2024-09-27 15:53:45.939879] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:05.636 15:53:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.636 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:05.637 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.898 15:53:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:06.839 [2024-09-27 15:53:47.297094] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:06.839 [2024-09-27 15:53:47.297108] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:06.839 [2024-09-27 15:53:47.297116] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:07.099 [2024-09-27 15:53:47.383372] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:07.361 [2024-09-27 15:53:47.697743] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:07.361 [2024-09-27 15:53:47.697766] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.361 request: 00:35:07.361 { 00:35:07.361 "name": "nvme", 00:35:07.361 "trtype": "tcp", 00:35:07.361 "traddr": "10.0.0.2", 00:35:07.361 "adrfam": "ipv4", 00:35:07.361 "trsvcid": "8009", 00:35:07.361 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:07.361 "wait_for_attach": true, 00:35:07.361 "method": "bdev_nvme_start_discovery", 00:35:07.361 "req_id": 1 00:35:07.361 } 00:35:07.361 Got JSON-RPC error response 00:35:07.361 response: 00:35:07.361 { 00:35:07.361 "code": -17, 00:35:07.361 "message": "File exists" 00:35:07.361 } 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.361 request: 00:35:07.361 { 00:35:07.361 "name": "nvme_second", 00:35:07.361 "trtype": "tcp", 00:35:07.361 "traddr": "10.0.0.2", 00:35:07.361 "adrfam": "ipv4", 00:35:07.361 "trsvcid": "8009", 00:35:07.361 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:07.361 "wait_for_attach": true, 00:35:07.361 "method": "bdev_nvme_start_discovery", 00:35:07.361 "req_id": 1 00:35:07.361 } 00:35:07.361 Got JSON-RPC error response 00:35:07.361 response: 00:35:07.361 { 00:35:07.361 "code": -17, 00:35:07.361 "message": "File exists" 00:35:07.361 } 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.361 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:07.622 15:53:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:07.622 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.623 15:53:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:08.564 [2024-09-27 15:53:48.961195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:08.564 [2024-09-27 15:53:48.961219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1d50 with addr=10.0.0.2, port=8010 00:35:08.564 [2024-09-27 15:53:48.961229] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:08.564 [2024-09-27 15:53:48.961234] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:08.564 [2024-09-27 15:53:48.961239] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:09.507 [2024-09-27 15:53:49.963392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:09.507 [2024-09-27 15:53:49.963412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1d50 with addr=10.0.0.2, port=8010 00:35:09.507 [2024-09-27 15:53:49.963421] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:09.507 [2024-09-27 15:53:49.963426] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:35:09.507 [2024-09-27 15:53:49.963431] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:10.892 [2024-09-27 15:53:50.965530] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:10.892 request: 00:35:10.892 { 00:35:10.892 "name": "nvme_second", 00:35:10.892 "trtype": "tcp", 00:35:10.892 "traddr": "10.0.0.2", 00:35:10.892 "adrfam": "ipv4", 00:35:10.892 "trsvcid": "8010", 00:35:10.892 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:10.892 "wait_for_attach": false, 00:35:10.892 "attach_timeout_ms": 3000, 00:35:10.892 "method": "bdev_nvme_start_discovery", 00:35:10.892 "req_id": 1 00:35:10.892 } 00:35:10.892 Got JSON-RPC error response 00:35:10.892 response: 00:35:10.892 { 00:35:10.892 "code": -110, 00:35:10.892 "message": "Connection timed out" 00:35:10.892 } 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.892 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:10.893 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.893 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:10.893 15:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 577398 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.893 rmmod nvme_tcp 00:35:10.893 rmmod nvme_fabrics 00:35:10.893 rmmod nvme_keyring 00:35:10.893 15:53:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 577356 ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 577356 ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 577356' 00:35:10.893 killing process with pid 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 577356 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.893 15:53:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.440 00:35:13.440 real 0m20.358s 00:35:13.440 user 0m23.501s 00:35:13.440 sys 0m7.200s 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.440 
************************************ 00:35:13.440 END TEST nvmf_host_discovery 00:35:13.440 ************************************ 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.440 ************************************ 00:35:13.440 START TEST nvmf_host_multipath_status 00:35:13.440 ************************************ 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:13.440 * Looking for test storage... 00:35:13.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.440 --rc genhtml_branch_coverage=1 00:35:13.440 --rc genhtml_function_coverage=1 00:35:13.440 --rc genhtml_legend=1 00:35:13.440 --rc geninfo_all_blocks=1 00:35:13.440 --rc geninfo_unexecuted_blocks=1 00:35:13.440 00:35:13.440 ' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.440 --rc genhtml_branch_coverage=1 00:35:13.440 --rc genhtml_function_coverage=1 00:35:13.440 --rc genhtml_legend=1 00:35:13.440 --rc geninfo_all_blocks=1 00:35:13.440 --rc geninfo_unexecuted_blocks=1 00:35:13.440 00:35:13.440 ' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.440 --rc genhtml_branch_coverage=1 00:35:13.440 --rc genhtml_function_coverage=1 00:35:13.440 --rc genhtml_legend=1 00:35:13.440 --rc geninfo_all_blocks=1 00:35:13.440 --rc geninfo_unexecuted_blocks=1 00:35:13.440 00:35:13.440 ' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.440 --rc genhtml_branch_coverage=1 00:35:13.440 --rc genhtml_function_coverage=1 00:35:13.440 --rc genhtml_legend=1 00:35:13.440 --rc geninfo_all_blocks=1 00:35:13.440 --rc geninfo_unexecuted_blocks=1 00:35:13.440 00:35:13.440 ' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
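The cmp_versions trace above is the core of the lcov version gate: both version strings are split on '.', '-' and ':' into arrays and compared field by field as integers, with missing fields treated as zero. A minimal standalone sketch of the same dotted-version comparison, assuming plain bash (the version_lt name is illustrative, not the actual SPDK helper):

  # Sketch: return success when dotted version $1 is strictly less than $2.
  version_lt() {
    local IFS=.-:                          # same separators the trace splits on
    local -a v1=($1) v2=($2)
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      a=${v1[i]:-0} b=${v2[i]:-0}          # pad the shorter version with zeros
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1                               # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo '1.15 < 2'     # matches the trace: lt 1.15 2 succeeds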
00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.440 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.441 15:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:21.592 15:54:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:21.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:21.592 15:54:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:21.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:21.592 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:21.593 Found net devices under 0000:31:00.0: cvl_0_0 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:21.593 Found net devices under 0000:31:00.1: cvl_0_1 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:21.593 15:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:21.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:21.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:35:21.593 00:35:21.593 --- 10.0.0.2 ping statistics --- 00:35:21.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.593 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:21.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:35:21.593 00:35:21.593 --- 10.0.0.1 ping statistics --- 00:35:21.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.593 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=583705 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 583705 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 583705 ']' 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
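Condensed from the nvmf_tcp_init trace above: the test isolates the target-side port in its own network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) can exchange TCP on a single host. A sketch of that sequence under the same device and namespace names the log reports (run as root; error handling omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Only after both pings succeed is nvmf_tgt launched inside the namespace, via the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3 invocation recorded just above.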
00:35:21.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:21.593 15:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:21.593 [2024-09-27 15:54:01.442680] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:21.593 [2024-09-27 15:54:01.442758] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:21.593 [2024-09-27 15:54:01.535590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:21.593 [2024-09-27 15:54:01.582027] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:21.593 [2024-09-27 15:54:01.582084] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:21.593 [2024-09-27 15:54:01.582094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:21.593 [2024-09-27 15:54:01.582102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:21.593 [2024-09-27 15:54:01.582109] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:21.593 [2024-09-27 15:54:01.582295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.593 [2024-09-27 15:54:01.582298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=583705 00:35:21.855 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:22.118 [2024-09-27 15:54:02.458808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.118 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:22.379 Malloc0 00:35:22.379 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:22.640 15:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:22.640 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.900 [2024-09-27 15:54:03.224168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.900 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:23.161 [2024-09-27 15:54:03.408636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=584100 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 584100 /var/tmp/bdevperf.sock 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 584100 ']' 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:23.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
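The bring-up just traced reduces to a handful of rpc.py calls: create the TCP transport, back subsystem nqn.2016-06.io.spdk:cnode1 with a 64 MB malloc bdev, and expose it on two ports so a second I/O path exists for the multipath checks. Condensed with the same arguments the log shows ($rpc is shorthand introduced here for readability, not a variable from the test scripts):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same address give bdevperf two paths to the one namespace.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf then attaches Nvme0 once per listener, the second time with -x multipath, and the bdev_nvme_get_io_paths | jq queries that follow read back each path's current/connected/accessible flags as the ANA states are toggled.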
00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.161 15:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:24.104 15:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.104 15:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:35:24.104 15:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:24.104 15:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:24.364 Nvme0n1 00:35:24.364 15:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:24.935 Nvme0n1 00:35:24.935 15:54:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:24.935 15:54:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:26.845 15:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:26.845 15:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:27.104 15:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:27.364 15:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:28.305 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:28.305 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:28.305 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.305 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:28.565 15:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.565 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:28.826 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:28.826 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:28.826 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.826 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.087 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:29.348 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.348 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:29.348 15:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:29.608 15:54:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:29.608 15:54:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.991 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:31.251 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.251 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:31.251 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:31.251 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:31.511 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.511 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:31.511 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:31.511 15:54:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:31.511 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.512 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:31.512 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:31.512 15:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:31.772 15:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.772 15:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:31.772 15:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:32.032 15:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:32.032 15:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:33.418 15:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:33.678 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:33.678 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:33.679 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:33.679 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:33.940 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:33.940 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:33.940 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:33.940 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:34.200 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:34.461 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:34.722 15:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:35.662 15:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:35.662 15:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:35.663 15:54:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:35.663 15:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:35.923 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:35.923 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:35.924 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:36.184 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.184 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:36.184 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.184 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:36.445 15:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.445 15:54:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:36.705 15:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:36.705 15:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:36.705 15:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:36.966 15:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:36.966 15:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:37.908 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:37.908 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:37.908 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.908 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:38.169 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:38.169 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:38.169 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.169 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:38.429 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:38.429 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:38.429 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.429 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:38.689 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.689 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:38.689 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.689 15:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:38.689 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:38.689 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:38.689 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.689 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:38.951 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:38.951 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:38.951 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:38.951 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:39.211 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:39.211 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:39.211 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:39.211 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:39.472 15:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:40.414 15:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:40.414 15:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:40.414 15:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.414 15:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:40.675 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:40.675 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:40.675 15:54:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.675 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.936 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:41.196 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.196 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:41.196 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.196 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:41.457 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:41.457 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:41.457 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.457 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:41.718 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.718 15:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:41.718 15:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:35:41.718 15:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:41.978 15:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:41.978 15:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.362 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:43.623 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.623 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:43.623 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.623 15:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:43.623 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.623 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:43.623 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.623 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:43.882 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:43.882 15:54:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:43.882 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.883 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:44.143 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:44.405 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:44.665 15:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:45.607 15:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:45.607 15:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:45.607 15:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.607 15:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.867 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:46.128 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:46.128 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:46.128 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.128 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:46.389 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:46.389 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:46.389 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.389 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:46.650 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:46.650 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:46.650 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.650 15:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:46.650 15:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:46.650 15:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:46.650 15:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:46.911 15:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:47.171 15:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
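The cycle traced above is the core of the test: set_ANA_state (multipath_status.sh@59-@60) flips the ANA state of the 4420 and 4421 listeners on the target, the script sleeps one second so the host can observe the change, and check_status then asserts the current/connected/accessible flags that bdevperf reports for every I/O path. A minimal sketch of the two helpers, reconstructed from the xtrace output above — the rpc.py subcommands, socket path, and jq filters are verbatim from the trace, while the variable names and exact quoting are assumptions:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    target_ip=10.0.0.2

    # Ask bdevperf for its view of every I/O path, keep the path whose
    # listener service ID matches $port, and compare one attribute
    # (current, connected or accessible) against the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3
        [[ "$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")" == "$expected" ]]
    }

    # Set the ANA state (optimized, non_optimized or inaccessible) of the
    # two listeners on the target side; the caller sleeps afterwards to
    # give the host time to pick up the change.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a $target_ip -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a $target_ip -s 4421 -n $2
    }

Under the default active_passive policy (up through the @114 check above), at most one path reports current=true at a time, and moving 4420/4421 between optimized, non_optimized and inaccessible moves that flag around; connected stays true throughout because the TCP connections themselves never drop. After bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (@116), every path in the best available ANA group is current, which is why the @121 and @131 checks expect current=true on both ports at once.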
00:35:48.113 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:48.113 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:48.113 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.113 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:48.374 15:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.634 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.634 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:48.634 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.635 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:48.895 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.895 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:48.895 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.895 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:48.896 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.896 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:49.157 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.157 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:49.157 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.157 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:49.157 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:49.418 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:49.681 15:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:50.623 15:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:50.623 15:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:50.623 15:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.623 15:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.885 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:51.147 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:35:51.147 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:51.147 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.147 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.408 15:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 584100
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 584100 ']'
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 584100
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584100
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584100'
00:35:51.669 killing process with pid 584100
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 584100
00:35:51.669 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 584100
00:35:51.669 {
00:35:51.669   "results": [
00:35:51.669     {
00:35:51.669       "job": "Nvme0n1",
00:35:51.669       "core_mask": "0x4",
00:35:51.669       "workload": "verify",
00:35:51.669       "status": "terminated",
00:35:51.669       "verify_range": {
00:35:51.669         "start": 0,
00:35:51.669         "length": 16384
00:35:51.669       },
00:35:51.669       "queue_depth": 128,
00:35:51.669       "io_size": 4096,
00:35:51.669       "runtime": 26.71104,
00:35:51.669       "iops": 12044.083644815028,
00:35:51.669       "mibps": 47.0472017375587,
00:35:51.669       "io_failed": 0,
00:35:51.669       "io_timeout": 0,
00:35:51.669       "avg_latency_us": 10609.153612715385,
00:35:51.669       "min_latency_us": 324.26666666666665,
00:35:51.669       "max_latency_us": 3019898.88
00:35:51.669     }
00:35:51.669   ],
00:35:51.669   "core_count": 1
00:35:51.669 }
00:35:51.933 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 584100
00:35:51.934 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:51.934 [2024-09-27 15:54:03.478905] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:35:51.934 [2024-09-27 15:54:03.478983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584100 ]
00:35:51.934 [2024-09-27 15:54:03.561196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:51.934 [2024-09-27 15:54:03.607018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:35:51.934 [2024-09-27 15:54:05.173442] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
00:35:51.934 Running I/O for 90 seconds...
00:35:51.934 10307.00 IOPS, 40.26 MiB/s 10689.00 IOPS, 41.75 MiB/s 11311.00 IOPS, 44.18 MiB/s 11758.50 IOPS, 45.93 MiB/s 11992.60 IOPS, 46.85 MiB/s 12137.33 IOPS, 47.41 MiB/s 12248.14 IOPS, 47.84 MiB/s 12332.62 IOPS, 48.17 MiB/s 12402.33 IOPS, 48.45 MiB/s 12446.40 IOPS, 48.62 MiB/s 12484.00 IOPS, 48.77 MiB/s [2024-09-27 15:54:17.179276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:51.934 
[2024-09-27 15:54:17.179816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.934 [2024-09-27 15:54:17.179962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.179990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.179995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:51.934 [2024-09-27 15:54:17.180972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.934 [2024-09-27 15:54:17.180977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.180989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.180996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 
[2024-09-27 15:54:17.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3016 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:51.935 [2024-09-27 15:54:17.181772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.935 [2024-09-27 15:54:17.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.181982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.181996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:51.936 
[2024-09-27 15:54:17.182156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:17.182254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:17.182259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:51.936 12314.58 IOPS, 48.10 MiB/s 11367.31 IOPS, 44.40 MiB/s 10555.36 IOPS, 41.23 MiB/s 10017.60 IOPS, 39.13 MiB/s 10197.00 IOPS, 39.83 MiB/s 10357.88 IOPS, 40.46 MiB/s 10745.56 IOPS, 41.97 MiB/s 11085.63 IOPS, 43.30 MiB/s 11265.55 IOPS, 44.01 MiB/s 11343.57 IOPS, 44.31 MiB/s 11407.68 IOPS, 44.56 MiB/s 11633.17 IOPS, 45.44 MiB/s 11860.12 IOPS, 46.33 MiB/s [2024-09-27 15:54:29.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.936 [2024-09-27 15:54:29.894930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.936 [2024-09-27 15:54:29.896870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:51.936 [2024-09-27 15:54:29.896880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.896988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.896999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.937 [2024-09-27 15:54:29.897020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.937 [2024-09-27 15:54:29.897035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 
15:54:29.897066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:51.937 [2024-09-27 15:54:29.897205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:51.937 [2024-09-27 15:54:29.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:51.937 11977.24 IOPS, 46.79 MiB/s 12020.58 IOPS, 46.96 MiB/s Received shutdown signal, test time was about 26.711650 seconds 00:35:51.937 
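The flood of paired NOTICE lines above is the multipath test driving I/O while one path sits in the ANA "inaccessible" state: nvme_io_qpair_print_command prints each failed WRITE/READ and spdk_nvme_print_completion prints its completion status, where (03/02) means status code type 0x3 (path-related) with status code 0x2 (asymmetric access inaccessible). When triaging a saved copy of a log like this, the flood can be collapsed into per-status counts with standard tools; a minimal sketch (the log path is a placeholder, and the line format is assumed to match the spdk_nvme_print_completion output above):

  #!/usr/bin/env bash
  # Collapse repeated SPDK completion notices into per-status counts.
  log=${1:-console.log}   # placeholder path to a saved copy of this build log
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' "$log" |
    sed 's/.*NOTICE\*: //' |
    sort | uniq -c | sort -rn

The run summary printed at shutdown follows.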
00:35:51.937 Latency(us)
00:35:51.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.937 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:51.937 Verification LBA range: start 0x0 length 0x4000
00:35:51.937 Nvme0n1 : 26.71 12044.08 47.05 0.00 0.00 10609.15 324.27 3019898.88
00:35:51.937 ===================================================================================================================
00:35:51.937 Total : 12044.08 47.05 0.00 0.00 10609.15 324.27 3019898.88
00:35:51.937 [2024-09-27 15:54:32.122315] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:35:51.937 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:51.937 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:35:51.937 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:52.199 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 583705 ']'
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 583705
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 583705 ']'
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 583705
00:35:52.199 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 583705
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 583705'
killing process with pid 583705
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 583705
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 583705
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:52.200 15:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:54.749
00:35:54.749 real 0m41.314s
00:35:54.749 user 1m45.821s
00:35:54.749 sys 0m11.818s
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:35:54.749 ************************************
00:35:54.749 END TEST nvmf_host_multipath_status
00:35:54.749 ************************************
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.749 ************************************
00:35:54.749 START TEST nvmf_discovery_remove_ifc
00:35:54.749 ************************************
00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:54.749 * Looking for test storage...
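The trace above is the standard autotest teardown: multipath_status.sh deletes the subsystem over JSON-RPC, clears its trap and scratch file, and nvmftestfini unloads the NVMe-oF kernel modules, kills the target (pid 583705), strips the SPDK iptables rules, and flushes the test interface before the harness starts the next test. A condensed sketch of that sequence, with the pid and interface taken from the trace (a paraphrase, not the verbatim nvmf/common.sh helpers):

  #!/usr/bin/env bash
  # Condensed nvmf test teardown in the spirit of nvmftestfini.
  pid=583705     # nvmf target pid, from the killprocess trace above
  ifc=cvl_0_1    # test interface, from the ip trace above

  sync
  set +e
  for i in {1..20}; do                 # module removal can need a few retries
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e

  kill "$pid" 2>/dev/null              # killprocess: stop the target app
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK test rules
  ip -4 addr flush "$ifc"              # clear the test interface address

The storage probe started at "Looking for test storage..." above continues below as discovery_remove_ifc locates the SPDK tree and sets up its environment.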
00:35:54.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:35:54.749 15:54:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.749 --rc genhtml_branch_coverage=1 00:35:54.749 --rc genhtml_function_coverage=1 00:35:54.749 --rc genhtml_legend=1 00:35:54.749 --rc geninfo_all_blocks=1 00:35:54.749 --rc geninfo_unexecuted_blocks=1 00:35:54.749 00:35:54.749 ' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.749 --rc genhtml_branch_coverage=1 00:35:54.749 --rc genhtml_function_coverage=1 00:35:54.749 --rc genhtml_legend=1 00:35:54.749 --rc geninfo_all_blocks=1 00:35:54.749 --rc geninfo_unexecuted_blocks=1 00:35:54.749 00:35:54.749 ' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.749 --rc genhtml_branch_coverage=1 00:35:54.749 --rc genhtml_function_coverage=1 00:35:54.749 --rc genhtml_legend=1 00:35:54.749 --rc geninfo_all_blocks=1 00:35:54.749 --rc geninfo_unexecuted_blocks=1 00:35:54.749 00:35:54.749 ' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.749 --rc genhtml_branch_coverage=1 00:35:54.749 --rc genhtml_function_coverage=1 00:35:54.749 --rc genhtml_legend=1 00:35:54.749 --rc geninfo_all_blocks=1 00:35:54.749 --rc geninfo_unexecuted_blocks=1 00:35:54.749 00:35:54.749 ' 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.749 
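Before sourcing nvmf/common.sh, the trace above walks scripts/common.sh's version helpers: autotest_common.sh asks whether the installed lcov predates 2.x ("lt 1.15 2" calls cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' and ':' and compares them field by field) so it can export matching LCOV_OPTS. A simplified sketch of that comparison (not the verbatim SPDK helper):

  #!/usr/bin/env bash
  # Field-by-field dotted-version compare, in the spirit of cmp_versions.
  version_lt() {
      local IFS=.
      local -a v1=($1) v2=($2)
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field smaller
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # earlier field larger
      done
      return 1   # equal versions are not less-than
  }

  version_lt 1.15 2 && echo "lcov 1.15 < 2: pick the 1.x coverage options"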
15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.749 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:54.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:54.750 15:54:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:02.895 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.895 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.895 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:36:02.896 15:54:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:02.896 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:02.896 15:54:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:02.896 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:02.896 Found net devices under 0000:31:00.0: cvl_0_0 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:02.896 Found net devices under 0000:31:00.1: cvl_0_1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.896 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:36:02.896 00:36:02.896 --- 10.0.0.2 ping statistics --- 00:36:02.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.896 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:36:02.896 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:36:02.897 00:36:02.897 --- 10.0.0.1 ping statistics --- 00:36:02.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.897 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=594486 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 594486 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 594486 ']' 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
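The pings above close out nvmftestinit: the target-side port (cvl_0_0) lives in its own network namespace so host and target talk over a real TCP path, while cvl_0_1 stays in the root namespace as the initiator interface. A minimal sketch of the equivalent manual setup, using the interface names and addresses from the log (the ipts wrapper simply tags its rule with an SPDK_NVMF comment so cleanup can find it later):

    # Move the target NIC into a private namespace; keep the initiator NIC in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2    # verify initiator -> target reachability before starting nvmf_tgt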
00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:02.897 15:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:02.897 [2024-09-27 15:54:42.859133] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:36:02.897 [2024-09-27 15:54:42.859200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.897 [2024-09-27 15:54:42.950344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.897 [2024-09-27 15:54:42.996463] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.897 [2024-09-27 15:54:42.996521] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.897 [2024-09-27 15:54:42.996533] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.897 [2024-09-27 15:54:42.996544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.897 [2024-09-27 15:54:42.996551] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:02.897 [2024-09-27 15:54:42.996584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.470 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:03.471 [2024-09-27 15:54:43.751029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.471 [2024-09-27 15:54:43.759345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:03.471 null0 00:36:03.471 [2024-09-27 15:54:43.791234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=594569 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 594569 /tmp/host.sock 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 594569 ']' 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:03.471 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:03.471 15:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:03.471 [2024-09-27 15:54:43.870448] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:36:03.471 [2024-09-27 15:54:43.870515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594569 ] 00:36:03.471 [2024-09-27 15:54:43.955034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.732 [2024-09-27 15:54:44.002867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:04.304 15:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:05.690 [2024-09-27 15:54:45.832145] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:05.690 [2024-09-27 15:54:45.832180] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:05.690 [2024-09-27 15:54:45.832198] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:05.690 [2024-09-27 15:54:45.918451] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:05.690 [2024-09-27 15:54:46.101095] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:05.690 [2024-09-27 15:54:46.101146] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:05.690 [2024-09-27 15:54:46.101169] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:05.690 [2024-09-27 15:54:46.101183] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:05.690 [2024-09-27 15:54:46.101203] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.690 [2024-09-27 15:54:46.150486] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2431f40 was disconnected and freed. delete nvme_qpair. 
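From here the test drives everything through wait_for_bdev, which polls the host app's private RPC socket once a second until the bdev list matches an expected value. A rough reconstruction of the two helpers from the xtrace above (the real script in host/discovery_remove_ifc.sh may bound the loop; this sketch assumes it polls indefinitely):

    get_bdev_list() {
        # One sorted, space-separated line of bdev names from the host app
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Block until the list equals the expected value:
        # "nvme0n1" after attach, "" after removal, "nvme1n1" after re-attach
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }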
00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:05.690 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:05.950 15:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:06.890 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.151 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:07.151 15:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:08.093 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:08.093 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:08.093 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:08.093 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.093 15:54:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:08.093 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:08.094 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:08.094 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.094 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:08.094 15:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:09.036 15:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:10.420 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:10.420 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:10.420 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:10.421 15:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:11.361 [2024-09-27 15:54:51.541812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:11.361 [2024-09-27 15:54:51.541851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:11.361 [2024-09-27 15:54:51.541861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.361 [2024-09-27 15:54:51.541869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:11.361 [2024-09-27 15:54:51.541875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.361 [2024-09-27 15:54:51.541881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:11.361 [2024-09-27 15:54:51.541886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.361 [2024-09-27 15:54:51.541892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:11.361 [2024-09-27 15:54:51.541901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.361 [2024-09-27 15:54:51.541907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:11.361 [2024-09-27 15:54:51.541912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.361 [2024-09-27 15:54:51.541918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240e7c0 is same with the state(6) to be set 00:36:11.361 [2024-09-27 15:54:51.551833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240e7c0 (9): Bad file descriptor 00:36:11.361 [2024-09-27 15:54:51.561869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:11.361 15:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:12.302 [2024-09-27 15:54:52.587968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:12.302 [2024-09-27 15:54:52.588060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240e7c0 with addr=10.0.0.2, port=4420 00:36:12.302 [2024-09-27 15:54:52.588093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240e7c0 is same with the state(6) to be set 00:36:12.302 [2024-09-27 15:54:52.588149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240e7c0 (9): Bad file descriptor 00:36:12.302 [2024-09-27 15:54:52.589268] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform 
failover, already in progress. 00:36:12.302 [2024-09-27 15:54:52.589338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:12.302 [2024-09-27 15:54:52.589373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:12.302 [2024-09-27 15:54:52.589396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:12.302 [2024-09-27 15:54:52.589460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:12.302 [2024-09-27 15:54:52.589486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:12.302 15:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.302 15:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:12.302 15:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:13.242 [2024-09-27 15:54:53.591883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:13.242 [2024-09-27 15:54:53.591901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:13.242 [2024-09-27 15:54:53.591907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:13.242 [2024-09-27 15:54:53.591913] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:36:13.242 [2024-09-27 15:54:53.591923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
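The reset/reconnect churn above is the test working as intended: the target-side address was deleted and the link downed a few seconds earlier (discovery_remove_ifc.sh@75-76), and the controller was attached with deliberately short failure timeouts so the error path completes in seconds rather than minutes. The two relevant steps, condensed from earlier in the log:

    # Attach via discovery with aggressive failure timeouts (host/discovery_remove_ifc.sh@69)
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Yank the target interface out from under the live connection (@75-76)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down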
00:36:13.242 [2024-09-27 15:54:53.591939] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:13.242 [2024-09-27 15:54:53.591955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:13.242 [2024-09-27 15:54:53.591963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.242 [2024-09-27 15:54:53.591971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:13.242 [2024-09-27 15:54:53.591976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.242 [2024-09-27 15:54:53.591983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:13.242 [2024-09-27 15:54:53.591989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.242 [2024-09-27 15:54:53.591994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:13.242 [2024-09-27 15:54:53.591999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.242 [2024-09-27 15:54:53.592005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:13.242 [2024-09-27 15:54:53.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.242 [2024-09-27 15:54:53.592015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
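Once the ctrlr-loss timeout expires, bdev_nvme deletes the controller and the discovery poller drops its entry, so get_bdev_list goes empty; that satisfies the pending wait_for_bdev ''. The second half of the test, visible over the next few lines, then restores the interface and expects the still-running discovery service to re-attach under a fresh name:

    wait_for_bdev ''          # list drains once the lost controller is deleted (@79)

    # Bring the target address and link back (@82-83); discovery should
    # re-attach the same subsystem as a new controller, hence nvme1n1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1     # (@86)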
00:36:13.242 [2024-09-27 15:54:53.592389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fded0 (9): Bad file descriptor 00:36:13.242 [2024-09-27 15:54:53.593398] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:13.242 [2024-09-27 15:54:53.593405] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.242 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:13.502 15:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:14.443 15:54:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:14.443 15:54:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:15.383 [2024-09-27 15:54:55.645850] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:15.383 [2024-09-27 15:54:55.645863] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:15.383 [2024-09-27 15:54:55.645872] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:15.383 [2024-09-27 15:54:55.775267] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:15.383 [2024-09-27 15:54:55.834361] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:15.383 [2024-09-27 15:54:55.834393] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:15.383 [2024-09-27 15:54:55.834407] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:15.383 [2024-09-27 15:54:55.834416] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:15.383 [2024-09-27 15:54:55.834422] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:15.383 [2024-09-27 15:54:55.842907] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x240fa60 was disconnected and freed. delete nvme_qpair. 
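With nvme1n1 present the check at @33 passes and teardown begins: the host app is killed by pid, nvmftestfini unloads the kernel NVMe modules, and the iptr helper strips only the SPDK-tagged iptables rules before the namespace goes away. A condensed sketch of the cleanup path (the namespace deletion inside _remove_spdk_ns is an assumption; the log shows only the call, not its body):

    killprocess() {
        # Condensed: kill by pid, then reap (common/autotest_common.sh@969/@974);
        # the real helper also checks the process name first
        kill "$1" && wait "$1"
    }
    killprocess "$hostpid"

    modprobe -v -r nvme-tcp                                # also drops nvme_fabrics / nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: remove only SPDK-tagged rules
    _remove_spdk_ns                                        # assumed to run 'ip netns delete cvl_0_0_ns_spdk'
    ip -4 addr flush cvl_0_1                               # nvmf/common.sh@303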
00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:15.643 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 594569 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 594569 ']' 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 594569 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 594569 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 594569' 00:36:15.644 killing process with pid 594569 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 594569 00:36:15.644 15:54:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 594569 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:15.644 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:15.644 rmmod nvme_tcp 00:36:15.644 rmmod nvme_fabrics 00:36:15.904 rmmod nvme_keyring 00:36:15.904 15:54:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:15.904 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:36:15.904 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:36:15.904 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 594486 ']' 00:36:15.904 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 594486 ']' 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 594486' 00:36:15.905 killing process with pid 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 594486 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.905 15:54:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:18.453 00:36:18.453 real 0m23.601s 00:36:18.453 user 0m27.482s 00:36:18.453 sys 0m7.173s 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:18.453 ************************************ 00:36:18.453 END TEST nvmf_discovery_remove_ifc 00:36:18.453 ************************************ 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.453 ************************************ 00:36:18.453 START TEST nvmf_identify_kernel_target 00:36:18.453 ************************************ 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:18.453 * Looking for test storage... 00:36:18.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:36:18.453 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:18.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.454 --rc genhtml_branch_coverage=1 00:36:18.454 --rc genhtml_function_coverage=1 00:36:18.454 --rc genhtml_legend=1 00:36:18.454 --rc geninfo_all_blocks=1 00:36:18.454 --rc geninfo_unexecuted_blocks=1 00:36:18.454 00:36:18.454 ' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:18.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.454 --rc genhtml_branch_coverage=1 00:36:18.454 --rc genhtml_function_coverage=1 00:36:18.454 --rc genhtml_legend=1 00:36:18.454 --rc geninfo_all_blocks=1 00:36:18.454 --rc geninfo_unexecuted_blocks=1 00:36:18.454 00:36:18.454 ' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:18.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.454 --rc genhtml_branch_coverage=1 00:36:18.454 --rc genhtml_function_coverage=1 00:36:18.454 --rc genhtml_legend=1 00:36:18.454 --rc geninfo_all_blocks=1 00:36:18.454 --rc geninfo_unexecuted_blocks=1 00:36:18.454 00:36:18.454 ' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:18.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.454 --rc genhtml_branch_coverage=1 00:36:18.454 --rc genhtml_function_coverage=1 00:36:18.454 --rc genhtml_legend=1 00:36:18.454 --rc geninfo_all_blocks=1 00:36:18.454 --rc geninfo_unexecuted_blocks=1 00:36:18.454 00:36:18.454 ' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:36:18.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.454 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:18.455 15:54:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:26.601 15:55:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:26.601 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:26.601 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:26.601 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:26.602 Found net devices under 0000:31:00.0: cvl_0_0 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:26.602 Found net devices under 0000:31:00.1: cvl_0_1 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
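
For context on the scan traced above: gather_supported_nvmf_pci_devs matches each NIC's PCI vendor:device pair against known Intel E810/X722 and Mellanox ID tables, then resolves every match to its kernel net device through sysfs. A minimal standalone sketch of that last step in bash, assuming the E810 function this run found; the helper name is illustrative, not from nvmf/common.sh:

# Map a PCI function to the net device(s) registered under it, the same
# lookup the trace performs via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
pci_to_netdevs() {
    local pci=$1 dev
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue   # unmatched glob stays literal if no netdev is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
}
pci_to_netdevs 0000:31:00.0   # on this host: cvl_0_0
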
00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.602 15:55:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:36:26.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:36:26.602 00:36:26.602 --- 10.0.0.2 ping statistics --- 00:36:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.602 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:26.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:36:26.602 00:36:26.602 --- 10.0.0.1 ping statistics --- 00:36:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.602 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:26.602 15:55:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:26.602 15:55:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.900 Waiting for block devices as requested 00:36:29.901 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:29.901 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:30.161 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:30.161 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:30.422 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:30.422 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:30.422 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:30.683 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:30.683 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:30.683 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:30.944 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:30.944 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:36:31.205 
15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:31.205 No valid GPT data, bailing 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:31.205 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:31.468 00:36:31.468 Discovery Log Number of Records 2, Generation counter 2 00:36:31.468 =====Discovery Log Entry 0====== 00:36:31.468 trtype: tcp 00:36:31.468 adrfam: ipv4 00:36:31.468 subtype: current discovery subsystem 00:36:31.468 treq: not specified, sq flow control disable supported 00:36:31.468 portid: 1 00:36:31.468 trsvcid: 4420 00:36:31.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:31.468 traddr: 10.0.0.1 00:36:31.468 eflags: none 00:36:31.468 sectype: none 00:36:31.468 =====Discovery Log Entry 1====== 00:36:31.468 trtype: tcp 00:36:31.468 adrfam: ipv4 00:36:31.468 subtype: nvme subsystem 00:36:31.468 treq: not specified, sq flow control disable supported 00:36:31.468 portid: 1 00:36:31.468 trsvcid: 4420 00:36:31.468 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:31.468 traddr: 
10.0.0.1 00:36:31.468 eflags: none 00:36:31.468 sectype: none 00:36:31.468 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:31.468 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:31.468 ===================================================== 00:36:31.468 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:31.468 ===================================================== 00:36:31.468 Controller Capabilities/Features 00:36:31.468 ================================ 00:36:31.468 Vendor ID: 0000 00:36:31.468 Subsystem Vendor ID: 0000 00:36:31.468 Serial Number: 0a9d9e3e46c8a30c5f02 00:36:31.468 Model Number: Linux 00:36:31.468 Firmware Version: 6.8.9-20 00:36:31.468 Recommended Arb Burst: 0 00:36:31.468 IEEE OUI Identifier: 00 00 00 00:36:31.468 Multi-path I/O 00:36:31.468 May have multiple subsystem ports: No 00:36:31.468 May have multiple controllers: No 00:36:31.468 Associated with SR-IOV VF: No 00:36:31.468 Max Data Transfer Size: Unlimited 00:36:31.468 Max Number of Namespaces: 0 00:36:31.468 Max Number of I/O Queues: 1024 00:36:31.468 NVMe Specification Version (VS): 1.3 00:36:31.468 NVMe Specification Version (Identify): 1.3 00:36:31.468 Maximum Queue Entries: 1024 00:36:31.468 Contiguous Queues Required: No 00:36:31.468 Arbitration Mechanisms Supported 00:36:31.468 Weighted Round Robin: Not Supported 00:36:31.468 Vendor Specific: Not Supported 00:36:31.468 Reset Timeout: 7500 ms 00:36:31.468 Doorbell Stride: 4 bytes 00:36:31.468 NVM Subsystem Reset: Not Supported 00:36:31.468 Command Sets Supported 00:36:31.468 NVM Command Set: Supported 00:36:31.468 Boot Partition: Not Supported 00:36:31.468 Memory Page Size Minimum: 4096 bytes 00:36:31.468 Memory Page Size Maximum: 4096 bytes 00:36:31.468 Persistent Memory Region: Not Supported 00:36:31.468 Optional Asynchronous Events Supported 00:36:31.468 Namespace Attribute Notices: Not Supported 00:36:31.468 Firmware Activation Notices: Not Supported 00:36:31.468 ANA Change Notices: Not Supported 00:36:31.468 PLE Aggregate Log Change Notices: Not Supported 00:36:31.468 LBA Status Info Alert Notices: Not Supported 00:36:31.468 EGE Aggregate Log Change Notices: Not Supported 00:36:31.468 Normal NVM Subsystem Shutdown event: Not Supported 00:36:31.468 Zone Descriptor Change Notices: Not Supported 00:36:31.468 Discovery Log Change Notices: Supported 00:36:31.468 Controller Attributes 00:36:31.469 128-bit Host Identifier: Not Supported 00:36:31.469 Non-Operational Permissive Mode: Not Supported 00:36:31.469 NVM Sets: Not Supported 00:36:31.469 Read Recovery Levels: Not Supported 00:36:31.469 Endurance Groups: Not Supported 00:36:31.469 Predictable Latency Mode: Not Supported 00:36:31.469 Traffic Based Keep ALive: Not Supported 00:36:31.469 Namespace Granularity: Not Supported 00:36:31.469 SQ Associations: Not Supported 00:36:31.469 UUID List: Not Supported 00:36:31.469 Multi-Domain Subsystem: Not Supported 00:36:31.469 Fixed Capacity Management: Not Supported 00:36:31.469 Variable Capacity Management: Not Supported 00:36:31.469 Delete Endurance Group: Not Supported 00:36:31.469 Delete NVM Set: Not Supported 00:36:31.469 Extended LBA Formats Supported: Not Supported 00:36:31.469 Flexible Data Placement Supported: Not Supported 00:36:31.469 00:36:31.469 Controller Memory Buffer Support 00:36:31.469 ================================ 
00:36:31.469 Supported: No 00:36:31.469 00:36:31.469 Persistent Memory Region Support 00:36:31.469 ================================ 00:36:31.469 Supported: No 00:36:31.469 00:36:31.469 Admin Command Set Attributes 00:36:31.469 ============================ 00:36:31.469 Security Send/Receive: Not Supported 00:36:31.469 Format NVM: Not Supported 00:36:31.469 Firmware Activate/Download: Not Supported 00:36:31.469 Namespace Management: Not Supported 00:36:31.469 Device Self-Test: Not Supported 00:36:31.469 Directives: Not Supported 00:36:31.469 NVMe-MI: Not Supported 00:36:31.469 Virtualization Management: Not Supported 00:36:31.469 Doorbell Buffer Config: Not Supported 00:36:31.469 Get LBA Status Capability: Not Supported 00:36:31.469 Command & Feature Lockdown Capability: Not Supported 00:36:31.469 Abort Command Limit: 1 00:36:31.469 Async Event Request Limit: 1 00:36:31.469 Number of Firmware Slots: N/A 00:36:31.469 Firmware Slot 1 Read-Only: N/A 00:36:31.469 Firmware Activation Without Reset: N/A 00:36:31.469 Multiple Update Detection Support: N/A 00:36:31.469 Firmware Update Granularity: No Information Provided 00:36:31.469 Per-Namespace SMART Log: No 00:36:31.469 Asymmetric Namespace Access Log Page: Not Supported 00:36:31.469 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:31.469 Command Effects Log Page: Not Supported 00:36:31.469 Get Log Page Extended Data: Supported 00:36:31.469 Telemetry Log Pages: Not Supported 00:36:31.469 Persistent Event Log Pages: Not Supported 00:36:31.469 Supported Log Pages Log Page: May Support 00:36:31.469 Commands Supported & Effects Log Page: Not Supported 00:36:31.469 Feature Identifiers & Effects Log Page:May Support 00:36:31.469 NVMe-MI Commands & Effects Log Page: May Support 00:36:31.469 Data Area 4 for Telemetry Log: Not Supported 00:36:31.469 Error Log Page Entries Supported: 1 00:36:31.469 Keep Alive: Not Supported 00:36:31.469 00:36:31.469 NVM Command Set Attributes 00:36:31.469 ========================== 00:36:31.469 Submission Queue Entry Size 00:36:31.469 Max: 1 00:36:31.469 Min: 1 00:36:31.469 Completion Queue Entry Size 00:36:31.469 Max: 1 00:36:31.469 Min: 1 00:36:31.469 Number of Namespaces: 0 00:36:31.469 Compare Command: Not Supported 00:36:31.469 Write Uncorrectable Command: Not Supported 00:36:31.469 Dataset Management Command: Not Supported 00:36:31.469 Write Zeroes Command: Not Supported 00:36:31.469 Set Features Save Field: Not Supported 00:36:31.469 Reservations: Not Supported 00:36:31.469 Timestamp: Not Supported 00:36:31.469 Copy: Not Supported 00:36:31.469 Volatile Write Cache: Not Present 00:36:31.469 Atomic Write Unit (Normal): 1 00:36:31.469 Atomic Write Unit (PFail): 1 00:36:31.469 Atomic Compare & Write Unit: 1 00:36:31.469 Fused Compare & Write: Not Supported 00:36:31.469 Scatter-Gather List 00:36:31.469 SGL Command Set: Supported 00:36:31.469 SGL Keyed: Not Supported 00:36:31.469 SGL Bit Bucket Descriptor: Not Supported 00:36:31.469 SGL Metadata Pointer: Not Supported 00:36:31.469 Oversized SGL: Not Supported 00:36:31.469 SGL Metadata Address: Not Supported 00:36:31.469 SGL Offset: Supported 00:36:31.469 Transport SGL Data Block: Not Supported 00:36:31.469 Replay Protected Memory Block: Not Supported 00:36:31.469 00:36:31.469 Firmware Slot Information 00:36:31.469 ========================= 00:36:31.469 Active slot: 0 00:36:31.469 00:36:31.469 00:36:31.469 Error Log 00:36:31.469 ========= 00:36:31.469 00:36:31.469 Active Namespaces 00:36:31.469 ================= 00:36:31.469 Discovery Log Page 00:36:31.469 
================== 00:36:31.469 Generation Counter: 2 00:36:31.469 Number of Records: 2 00:36:31.469 Record Format: 0 00:36:31.469 00:36:31.469 Discovery Log Entry 0 00:36:31.469 ---------------------- 00:36:31.469 Transport Type: 3 (TCP) 00:36:31.469 Address Family: 1 (IPv4) 00:36:31.469 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:31.469 Entry Flags: 00:36:31.469 Duplicate Returned Information: 0 00:36:31.469 Explicit Persistent Connection Support for Discovery: 0 00:36:31.469 Transport Requirements: 00:36:31.469 Secure Channel: Not Specified 00:36:31.469 Port ID: 1 (0x0001) 00:36:31.469 Controller ID: 65535 (0xffff) 00:36:31.469 Admin Max SQ Size: 32 00:36:31.469 Transport Service Identifier: 4420 00:36:31.469 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:31.469 Transport Address: 10.0.0.1 00:36:31.469 Discovery Log Entry 1 00:36:31.469 ---------------------- 00:36:31.469 Transport Type: 3 (TCP) 00:36:31.469 Address Family: 1 (IPv4) 00:36:31.469 Subsystem Type: 2 (NVM Subsystem) 00:36:31.469 Entry Flags: 00:36:31.469 Duplicate Returned Information: 0 00:36:31.469 Explicit Persistent Connection Support for Discovery: 0 00:36:31.469 Transport Requirements: 00:36:31.469 Secure Channel: Not Specified 00:36:31.469 Port ID: 1 (0x0001) 00:36:31.469 Controller ID: 65535 (0xffff) 00:36:31.469 Admin Max SQ Size: 32 00:36:31.469 Transport Service Identifier: 4420 00:36:31.469 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:31.469 Transport Address: 10.0.0.1 00:36:31.469 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.469 get_feature(0x01) failed 00:36:31.469 get_feature(0x02) failed 00:36:31.469 get_feature(0x04) failed 00:36:31.469 ===================================================== 00:36:31.469 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.469 ===================================================== 00:36:31.469 Controller Capabilities/Features 00:36:31.469 ================================ 00:36:31.469 Vendor ID: 0000 00:36:31.469 Subsystem Vendor ID: 0000 00:36:31.469 Serial Number: 7effda7f5f192b120c8b 00:36:31.469 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:31.469 Firmware Version: 6.8.9-20 00:36:31.469 Recommended Arb Burst: 6 00:36:31.469 IEEE OUI Identifier: 00 00 00 00:36:31.469 Multi-path I/O 00:36:31.469 May have multiple subsystem ports: Yes 00:36:31.469 May have multiple controllers: Yes 00:36:31.469 Associated with SR-IOV VF: No 00:36:31.469 Max Data Transfer Size: Unlimited 00:36:31.469 Max Number of Namespaces: 1024 00:36:31.469 Max Number of I/O Queues: 128 00:36:31.469 NVMe Specification Version (VS): 1.3 00:36:31.469 NVMe Specification Version (Identify): 1.3 00:36:31.469 Maximum Queue Entries: 1024 00:36:31.469 Contiguous Queues Required: No 00:36:31.470 Arbitration Mechanisms Supported 00:36:31.470 Weighted Round Robin: Not Supported 00:36:31.470 Vendor Specific: Not Supported 00:36:31.470 Reset Timeout: 7500 ms 00:36:31.470 Doorbell Stride: 4 bytes 00:36:31.470 NVM Subsystem Reset: Not Supported 00:36:31.470 Command Sets Supported 00:36:31.470 NVM Command Set: Supported 00:36:31.470 Boot Partition: Not Supported 00:36:31.470 Memory Page Size Minimum: 4096 bytes 00:36:31.470 Memory Page Size Maximum: 4096 bytes 00:36:31.470 Persistent Memory Region: Not 
Supported 00:36:31.470 Optional Asynchronous Events Supported 00:36:31.470 Namespace Attribute Notices: Supported 00:36:31.470 Firmware Activation Notices: Not Supported 00:36:31.470 ANA Change Notices: Supported 00:36:31.470 PLE Aggregate Log Change Notices: Not Supported 00:36:31.470 LBA Status Info Alert Notices: Not Supported 00:36:31.470 EGE Aggregate Log Change Notices: Not Supported 00:36:31.470 Normal NVM Subsystem Shutdown event: Not Supported 00:36:31.470 Zone Descriptor Change Notices: Not Supported 00:36:31.470 Discovery Log Change Notices: Not Supported 00:36:31.470 Controller Attributes 00:36:31.470 128-bit Host Identifier: Supported 00:36:31.470 Non-Operational Permissive Mode: Not Supported 00:36:31.470 NVM Sets: Not Supported 00:36:31.470 Read Recovery Levels: Not Supported 00:36:31.470 Endurance Groups: Not Supported 00:36:31.470 Predictable Latency Mode: Not Supported 00:36:31.470 Traffic Based Keep ALive: Supported 00:36:31.470 Namespace Granularity: Not Supported 00:36:31.470 SQ Associations: Not Supported 00:36:31.470 UUID List: Not Supported 00:36:31.470 Multi-Domain Subsystem: Not Supported 00:36:31.470 Fixed Capacity Management: Not Supported 00:36:31.470 Variable Capacity Management: Not Supported 00:36:31.470 Delete Endurance Group: Not Supported 00:36:31.470 Delete NVM Set: Not Supported 00:36:31.470 Extended LBA Formats Supported: Not Supported 00:36:31.470 Flexible Data Placement Supported: Not Supported 00:36:31.470 00:36:31.470 Controller Memory Buffer Support 00:36:31.470 ================================ 00:36:31.470 Supported: No 00:36:31.470 00:36:31.470 Persistent Memory Region Support 00:36:31.470 ================================ 00:36:31.470 Supported: No 00:36:31.470 00:36:31.470 Admin Command Set Attributes 00:36:31.470 ============================ 00:36:31.470 Security Send/Receive: Not Supported 00:36:31.470 Format NVM: Not Supported 00:36:31.470 Firmware Activate/Download: Not Supported 00:36:31.470 Namespace Management: Not Supported 00:36:31.470 Device Self-Test: Not Supported 00:36:31.470 Directives: Not Supported 00:36:31.470 NVMe-MI: Not Supported 00:36:31.470 Virtualization Management: Not Supported 00:36:31.470 Doorbell Buffer Config: Not Supported 00:36:31.470 Get LBA Status Capability: Not Supported 00:36:31.470 Command & Feature Lockdown Capability: Not Supported 00:36:31.470 Abort Command Limit: 4 00:36:31.470 Async Event Request Limit: 4 00:36:31.470 Number of Firmware Slots: N/A 00:36:31.470 Firmware Slot 1 Read-Only: N/A 00:36:31.470 Firmware Activation Without Reset: N/A 00:36:31.470 Multiple Update Detection Support: N/A 00:36:31.470 Firmware Update Granularity: No Information Provided 00:36:31.470 Per-Namespace SMART Log: Yes 00:36:31.470 Asymmetric Namespace Access Log Page: Supported 00:36:31.470 ANA Transition Time : 10 sec 00:36:31.470 00:36:31.470 Asymmetric Namespace Access Capabilities 00:36:31.470 ANA Optimized State : Supported 00:36:31.470 ANA Non-Optimized State : Supported 00:36:31.470 ANA Inaccessible State : Supported 00:36:31.470 ANA Persistent Loss State : Supported 00:36:31.470 ANA Change State : Supported 00:36:31.470 ANAGRPID is not changed : No 00:36:31.470 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:31.470 00:36:31.470 ANA Group Identifier Maximum : 128 00:36:31.470 Number of ANA Group Identifiers : 128 00:36:31.470 Max Number of Allowed Namespaces : 1024 00:36:31.470 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:31.470 Command Effects Log Page: Supported 00:36:31.470 Get Log Page Extended Data: 
Supported 00:36:31.470 Telemetry Log Pages: Not Supported 00:36:31.470 Persistent Event Log Pages: Not Supported 00:36:31.470 Supported Log Pages Log Page: May Support 00:36:31.470 Commands Supported & Effects Log Page: Not Supported 00:36:31.470 Feature Identifiers & Effects Log Page:May Support 00:36:31.470 NVMe-MI Commands & Effects Log Page: May Support 00:36:31.470 Data Area 4 for Telemetry Log: Not Supported 00:36:31.470 Error Log Page Entries Supported: 128 00:36:31.470 Keep Alive: Supported 00:36:31.470 Keep Alive Granularity: 1000 ms 00:36:31.470 00:36:31.470 NVM Command Set Attributes 00:36:31.470 ========================== 00:36:31.470 Submission Queue Entry Size 00:36:31.470 Max: 64 00:36:31.470 Min: 64 00:36:31.470 Completion Queue Entry Size 00:36:31.470 Max: 16 00:36:31.470 Min: 16 00:36:31.470 Number of Namespaces: 1024 00:36:31.470 Compare Command: Not Supported 00:36:31.470 Write Uncorrectable Command: Not Supported 00:36:31.470 Dataset Management Command: Supported 00:36:31.470 Write Zeroes Command: Supported 00:36:31.470 Set Features Save Field: Not Supported 00:36:31.470 Reservations: Not Supported 00:36:31.470 Timestamp: Not Supported 00:36:31.470 Copy: Not Supported 00:36:31.470 Volatile Write Cache: Present 00:36:31.470 Atomic Write Unit (Normal): 1 00:36:31.470 Atomic Write Unit (PFail): 1 00:36:31.470 Atomic Compare & Write Unit: 1 00:36:31.470 Fused Compare & Write: Not Supported 00:36:31.470 Scatter-Gather List 00:36:31.470 SGL Command Set: Supported 00:36:31.470 SGL Keyed: Not Supported 00:36:31.470 SGL Bit Bucket Descriptor: Not Supported 00:36:31.470 SGL Metadata Pointer: Not Supported 00:36:31.470 Oversized SGL: Not Supported 00:36:31.470 SGL Metadata Address: Not Supported 00:36:31.470 SGL Offset: Supported 00:36:31.470 Transport SGL Data Block: Not Supported 00:36:31.470 Replay Protected Memory Block: Not Supported 00:36:31.470 00:36:31.470 Firmware Slot Information 00:36:31.470 ========================= 00:36:31.470 Active slot: 0 00:36:31.470 00:36:31.470 Asymmetric Namespace Access 00:36:31.470 =========================== 00:36:31.470 Change Count : 0 00:36:31.470 Number of ANA Group Descriptors : 1 00:36:31.470 ANA Group Descriptor : 0 00:36:31.470 ANA Group ID : 1 00:36:31.470 Number of NSID Values : 1 00:36:31.470 Change Count : 0 00:36:31.470 ANA State : 1 00:36:31.470 Namespace Identifier : 1 00:36:31.470 00:36:31.470 Commands Supported and Effects 00:36:31.470 ============================== 00:36:31.470 Admin Commands 00:36:31.470 -------------- 00:36:31.470 Get Log Page (02h): Supported 00:36:31.470 Identify (06h): Supported 00:36:31.470 Abort (08h): Supported 00:36:31.470 Set Features (09h): Supported 00:36:31.471 Get Features (0Ah): Supported 00:36:31.471 Asynchronous Event Request (0Ch): Supported 00:36:31.471 Keep Alive (18h): Supported 00:36:31.471 I/O Commands 00:36:31.471 ------------ 00:36:31.471 Flush (00h): Supported 00:36:31.471 Write (01h): Supported LBA-Change 00:36:31.471 Read (02h): Supported 00:36:31.471 Write Zeroes (08h): Supported LBA-Change 00:36:31.471 Dataset Management (09h): Supported 00:36:31.471 00:36:31.471 Error Log 00:36:31.471 ========= 00:36:31.471 Entry: 0 00:36:31.471 Error Count: 0x3 00:36:31.471 Submission Queue Id: 0x0 00:36:31.471 Command Id: 0x5 00:36:31.471 Phase Bit: 0 00:36:31.471 Status Code: 0x2 00:36:31.471 Status Code Type: 0x0 00:36:31.471 Do Not Retry: 1 00:36:31.471 Error Location: 0x28 00:36:31.471 LBA: 0x0 00:36:31.471 Namespace: 0x0 00:36:31.471 Vendor Log Page: 0x0 00:36:31.471 ----------- 
00:36:31.471 Entry: 1 00:36:31.471 Error Count: 0x2 00:36:31.471 Submission Queue Id: 0x0 00:36:31.471 Command Id: 0x5 00:36:31.471 Phase Bit: 0 00:36:31.471 Status Code: 0x2 00:36:31.471 Status Code Type: 0x0 00:36:31.471 Do Not Retry: 1 00:36:31.471 Error Location: 0x28 00:36:31.471 LBA: 0x0 00:36:31.471 Namespace: 0x0 00:36:31.471 Vendor Log Page: 0x0 00:36:31.471 ----------- 00:36:31.471 Entry: 2 00:36:31.471 Error Count: 0x1 00:36:31.471 Submission Queue Id: 0x0 00:36:31.471 Command Id: 0x4 00:36:31.471 Phase Bit: 0 00:36:31.471 Status Code: 0x2 00:36:31.471 Status Code Type: 0x0 00:36:31.471 Do Not Retry: 1 00:36:31.471 Error Location: 0x28 00:36:31.471 LBA: 0x0 00:36:31.471 Namespace: 0x0 00:36:31.471 Vendor Log Page: 0x0 00:36:31.471 00:36:31.471 Number of Queues 00:36:31.471 ================ 00:36:31.471 Number of I/O Submission Queues: 128 00:36:31.471 Number of I/O Completion Queues: 128 00:36:31.471 00:36:31.471 ZNS Specific Controller Data 00:36:31.471 ============================ 00:36:31.471 Zone Append Size Limit: 0 00:36:31.471 00:36:31.471 00:36:31.471 Active Namespaces 00:36:31.471 ================= 00:36:31.471 get_feature(0x05) failed 00:36:31.471 Namespace ID:1 00:36:31.471 Command Set Identifier: NVM (00h) 00:36:31.471 Deallocate: Supported 00:36:31.471 Deallocated/Unwritten Error: Not Supported 00:36:31.471 Deallocated Read Value: Unknown 00:36:31.471 Deallocate in Write Zeroes: Not Supported 00:36:31.471 Deallocated Guard Field: 0xFFFF 00:36:31.471 Flush: Supported 00:36:31.471 Reservation: Not Supported 00:36:31.471 Namespace Sharing Capabilities: Multiple Controllers 00:36:31.471 Size (in LBAs): 3750748848 (1788GiB) 00:36:31.471 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:31.471 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:31.471 UUID: 3e496caf-0565-41ae-b2b9-9984a4839c93 00:36:31.471 Thin Provisioning: Not Supported 00:36:31.471 Per-NS Atomic Units: Yes 00:36:31.471 Atomic Write Unit (Normal): 8 00:36:31.471 Atomic Write Unit (PFail): 8 00:36:31.471 Preferred Write Granularity: 8 00:36:31.471 Atomic Compare & Write Unit: 8 00:36:31.471 Atomic Boundary Size (Normal): 0 00:36:31.471 Atomic Boundary Size (PFail): 0 00:36:31.471 Atomic Boundary Offset: 0 00:36:31.471 NGUID/EUI64 Never Reused: No 00:36:31.471 ANA group ID: 1 00:36:31.471 Namespace Write Protected: No 00:36:31.471 Number of LBA Formats: 1 00:36:31.471 Current LBA Format: LBA Format #00 00:36:31.471 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:31.471 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:31.471 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:31.733 rmmod nvme_tcp 00:36:31.733 rmmod nvme_fabrics 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 
-- # set -e 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:31.733 15:55:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:36:31.733 15:55:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.733 15:55:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.733 15:55:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.733 15:55:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.733 15:55:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:36:33.646 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:36:33.906 15:55:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:37.203 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:37.203 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:36:37.463 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:37.463 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:38.032 00:36:38.032 real 0m19.739s 00:36:38.032 user 0m5.327s 00:36:38.032 sys 0m11.346s 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:38.032 ************************************ 00:36:38.032 END TEST nvmf_identify_kernel_target 00:36:38.032 ************************************ 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.032 ************************************ 00:36:38.032 START TEST nvmf_auth_host 00:36:38.032 ************************************ 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:38.032 * Looking for test storage... 
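
Stepping back over the fixture the previous test relied on: nvmf_tcp_init, traced at 15:55:06 above, splits the two E810 ports so one host can play both ends of the NVMe/TCP connection, and the teardown just traced removes it again. A condensed replay of the setup, with the address-flush steps omitted and the iptables comment shortened:

ip netns add cvl_0_0_ns_spdk                            # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                      # open the NVMe/TCP port
ping -c 1 10.0.0.2                                      # root namespace -> netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # netns -> root namespace

The SPDK_NVMF comment tag is what lets the teardown strip the rule wholesale with iptables-save | grep -v SPDK_NVMF | iptables-restore, as traced above. For this kernel-target test the nvmet listener bound 10.0.0.1 in the root namespace, so the discover and identify commands both stayed on the cvl_0_1 side.
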
00:36:38.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:38.032 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.293 --rc genhtml_branch_coverage=1 00:36:38.293 --rc genhtml_function_coverage=1 00:36:38.293 --rc genhtml_legend=1 00:36:38.293 --rc geninfo_all_blocks=1 00:36:38.293 --rc geninfo_unexecuted_blocks=1 00:36:38.293 00:36:38.293 ' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.293 --rc genhtml_branch_coverage=1 00:36:38.293 --rc genhtml_function_coverage=1 00:36:38.293 --rc genhtml_legend=1 00:36:38.293 --rc geninfo_all_blocks=1 00:36:38.293 --rc geninfo_unexecuted_blocks=1 00:36:38.293 00:36:38.293 ' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.293 --rc genhtml_branch_coverage=1 00:36:38.293 --rc genhtml_function_coverage=1 00:36:38.293 --rc genhtml_legend=1 00:36:38.293 --rc geninfo_all_blocks=1 00:36:38.293 --rc geninfo_unexecuted_blocks=1 00:36:38.293 00:36:38.293 ' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.293 --rc genhtml_branch_coverage=1 00:36:38.293 --rc genhtml_function_coverage=1 00:36:38.293 --rc genhtml_legend=1 00:36:38.293 --rc geninfo_all_blocks=1 00:36:38.293 --rc geninfo_unexecuted_blocks=1 00:36:38.293 00:36:38.293 ' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.293 15:55:18 
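[annotation] The trace above walks scripts/common.sh's "lt 1.15 2" check, which splits the lcov version on ".-:" and compares it field by field to decide whether the extra branch-coverage flags are needed. A hypothetical standalone form of the same comparison (function name is mine; assumes numeric fields):

    ver_lt() {                          # usage: ver_lt 1.15 2  ->  exit 0 if $1 < $2
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal -> not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: add coverage flags"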
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.293 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:38.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:38.294 15:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.492 15:55:25 
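[annotation] The "[: : integer expression expected" message a few lines up is bash complaining about a numeric test against an empty string, i.e. '[' '' -eq 1 ']' in nvmf/common.sh line 33. A hypothetical minimal reproduction plus the usual guard (variable name is mine, not the one in common.sh):

    flag=""                                # unset/empty option reaches the test
    # [ "$flag" -eq 1 ]                    # -> [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then        # default it first; the test stays well-formed
        echo "feature enabled"
    fi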
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:46.492 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:46.492 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.492 15:55:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:46.492 Found net devices under 0000:31:00.0: cvl_0_0 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:46.492 Found net devices under 0000:31:00.1: cvl_0_1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.492 15:55:25 
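[annotation] gather_supported_nvmf_pci_devs above resolves each matching PCI function to its kernel net interface through sysfs, yielding cvl_0_0 and cvl_0_1. The same lookup in standalone form (addresses taken from the log):

    # Map a PCI function to its net interface via /sys/bus/pci/devices/<addr>/net/.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue
            echo "$pci -> ${path##*/}"    # e.g. cvl_0_0 / cvl_0_1
        done
    done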
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:36:46.492 00:36:46.492 --- 10.0.0.2 ping statistics --- 00:36:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.492 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:46.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:36:46.492 00:36:46.492 --- 10.0.0.1 ping statistics --- 00:36:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.492 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:46.492 15:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=609073 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 609073 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 609073 ']' 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
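[annotation] The nvmf_tcp_init trace above splits the NIC's two ports between the root namespace (initiator, 10.0.0.1) and a dedicated namespace (target, 10.0.0.2), so the TCP transport crosses a real link; the pings verify both directions. A condensed, re-runnable form of the same commands (names taken from the log; the iptables comment tag is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check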
00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:46.492 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ef5b4cd1cf805ddc447fbbdfbf044252 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.W9f 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ef5b4cd1cf805ddc447fbbdfbf044252 0 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ef5b4cd1cf805ddc447fbbdfbf044252 0 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ef5b4cd1cf805ddc447fbbdfbf044252 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.W9f 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.W9f 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.W9f 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:46.493 15:55:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:36:46.493 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:36:46.762 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f6c29e9e102b0ce18b022aca519e81a68832f574afb82647414bd243a5963c32 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.lbV 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f6c29e9e102b0ce18b022aca519e81a68832f574afb82647414bd243a5963c32 3 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f6c29e9e102b0ce18b022aca519e81a68832f574afb82647414bd243a5963c32 3 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f6c29e9e102b0ce18b022aca519e81a68832f574afb82647414bd243a5963c32 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:36:46.763 15:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.lbV 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.lbV 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lbV 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7553ca620066ff74d45f790e9a632c546f53d1e53c071e8a 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NXU 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7553ca620066ff74d45f790e9a632c546f53d1e53c071e8a 0 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7553ca620066ff74d45f790e9a632c546f53d1e53c071e8a 0 
00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7553ca620066ff74d45f790e9a632c546f53d1e53c071e8a 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NXU 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NXU 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NXU 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e9424772fd5ca90a994aa51a80de8649d17b38caa933caae 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.5W4 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e9424772fd5ca90a994aa51a80de8649d17b38caa933caae 2 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e9424772fd5ca90a994aa51a80de8649d17b38caa933caae 2 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e9424772fd5ca90a994aa51a80de8649d17b38caa933caae 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.5W4 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.5W4 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.5W4 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:46.763 15:55:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=214103edc16527a688d98401ded96da2 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.jQ3 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 214103edc16527a688d98401ded96da2 1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 214103edc16527a688d98401ded96da2 1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=214103edc16527a688d98401ded96da2 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.jQ3 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.jQ3 00:36:46.763 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jQ3 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d4f50c49608c1458b42d352a1c003847 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Eyx 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d4f50c49608c1458b42d352a1c003847 1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d4f50c49608c1458b42d352a1c003847 1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=d4f50c49608c1458b42d352a1c003847 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Eyx 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Eyx 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Eyx 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ca5680af58e0232f6e941ca90bef065cfaf38054f8f95479 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.HNN 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ca5680af58e0232f6e941ca90bef065cfaf38054f8f95479 2 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ca5680af58e0232f6e941ca90bef065cfaf38054f8f95479 2 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ca5680af58e0232f6e941ca90bef065cfaf38054f8f95479 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.HNN 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.HNN 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HNN 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:36:47.040 15:55:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8427c2fef9c0719d13f7407d639a40ad 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.hqo 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8427c2fef9c0719d13f7407d639a40ad 0 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8427c2fef9c0719d13f7407d639a40ad 0 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8427c2fef9c0719d13f7407d639a40ad 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.hqo 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.hqo 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hqo 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=45e85cdeea242daf3be8c41b78d824e193ee27b68dbfdc983c2f2b7c84f332a0 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:36:47.040 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.pC9 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 45e85cdeea242daf3be8c41b78d824e193ee27b68dbfdc983c2f2b7c84f332a0 3 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 45e85cdeea242daf3be8c41b78d824e193ee27b68dbfdc983c2f2b7c84f332a0 3 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=45e85cdeea242daf3be8c41b78d824e193ee27b68dbfdc983c2f2b7c84f332a0 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.pC9 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.pC9 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pC9 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 609073 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 609073 ']' 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.041 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.W9f 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lbV ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lbV 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NXU 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.5W4 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.5W4 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jQ3 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Eyx ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eyx 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HNN 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hqo ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hqo 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pC9 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:47.320 15:55:27 
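[annotation] Each gen_dhchap_key call above draws hex from /dev/urandom, wraps it in the DH-HMAC-CHAP secret format, chmods the temp file to 0600, and the keys are then registered over JSON-RPC with keyring_file_add_key. A standalone sketch of that wrapping, assuming the base64 payload is the ASCII key followed by its little-endian CRC-32 (consistent with the DHHC-1 strings visible later in this log); digest ids follow the log's map (null=0, sha256=1, sha384=2, sha512=3):

    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 48 hex chars of key material
    formatted=$(python3 - "$key" 0 <<'PY'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])
    # assumption: payload = ASCII key || little-endian CRC-32 of the key
    blob = base64.b64encode(key + struct.pack('<I', zlib.crc32(key)))
    print(f"DHHC-1:{digest:02d}:{blob.decode()}:")
    PY
    )
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$formatted" > "$file"
    chmod 0600 "$file"    # then: rpc.py keyring_file_add_key key0 "$file"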
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:36:47.320 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:47.321 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:47.321 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:47.321 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:36:47.321 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:47.321 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:36:47.596 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:47.596 15:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:50.989 Waiting for block devices as requested 00:36:50.989 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:50.989 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:50.989 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:51.275 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:51.275 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:51.275 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:51.275 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:51.573 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:51.573 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:51.845 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:51.845 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:51.845 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:51.845 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:52.138 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:52.138 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:52.138 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:52.138 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:53.132 No valid GPT data, bailing 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:53.132 15:55:33 
00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:36:53.132 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:53.133 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:36:53.408
00:36:53.408 Discovery Log Number of Records 2, Generation counter 2
00:36:53.408 =====Discovery Log Entry 0======
00:36:53.408 trtype: tcp
00:36:53.408 adrfam: ipv4
00:36:53.408 subtype: current discovery subsystem
00:36:53.408 treq: not specified, sq flow control disable supported
00:36:53.408 portid: 1
00:36:53.408 trsvcid: 4420
00:36:53.408 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:36:53.408 traddr: 10.0.0.1
00:36:53.408 eflags: none
00:36:53.408 sectype: none
00:36:53.408 =====Discovery Log Entry 1======
00:36:53.408 trtype: tcp
00:36:53.408 adrfam: ipv4
00:36:53.408 subtype: nvme subsystem
00:36:53.408 treq: not specified, sq flow control disable supported
00:36:53.408 portid: 1
00:36:53.408 trsvcid: 4420
00:36:53.408 subnqn: nqn.2024-02.io.spdk:cnode0
00:36:53.408 traddr: 10.0.0.1
00:36:53.408 eflags: none
00:36:53.408 sectype: none
00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host
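
Entry 1 of the discovery log above confirms the kernel target now exports nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 alongside the well-known discovery subsystem. The auth.sh@36-38 steps lock the subsystem down to a single host NQN; a sketch, again with the redirection targets assumed (the bare 'echo 0' is read as clearing attr_allow_any_host):

  hostnqn=nqn.2024-02.io.spdk:host0
  mkdir "$nvmet/hosts/$hostnqn"
  echo 0 > "$subsys/attr_allow_any_host"    # assumed target of 'echo 0'
  ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/$hostnqn"
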
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.408 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.409 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.684 nvme0n1 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.684 15:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:53.684 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
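
nvmet_auth_set_key pushes a digest, a DH group and a pair of DHHC-1 secrets into that host entry. The secrets use the representation defined for NVMe in-band authentication, DHHC-1:NN:<base64>:, where NN names the transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check value, which is why the key strings above differ in length. A sketch of the corresponding configfs writes, using the per-host DH-CHAP attributes the kernel target exposes (the exact file names are an assumption, since the xtrace hides the redirections; key material is elided):

  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'        > "$host/dhchap_hash"       # digest, as echoed above
  echo ffdhe2048             > "$host/dhchap_dhgroup"    # DH group
  echo "DHHC-1:00:<base64>:" > "$host/dhchap_key"        # host secret (the 'key' echoed above)
  echo "DHHC-1:02:<base64>:" > "$host/dhchap_ctrl_key"   # controller secret ('ckey'), when present
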
00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.685 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 nvme0n1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.991 15:55:34 
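
connect_authenticate is the SPDK initiator side of each case: narrow the allowed digests and DH groups, attach with the matching key pair, check that a controller named nvme0 appears, then tear it down. The rpc_cmd calls traced above correspond to plain rpc.py invocations like these (key0/ckey0 name keys the test registered with SPDK's keyring before this excerpt begins):

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers        # expect one controller: "nvme0"
  scripts/rpc.py bdev_nvme_detach_controller nvme0
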
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 nvme0n1 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.991 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.259 nvme0n1 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:54.259 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.260 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.522 nvme0n1 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:54.522 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.523 15:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.785 nvme0n1 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.785 15:55:35 
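
Key index 4 is the only one without a controller secret, which is why the [[ -z '' ]] check above succeeds and the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion contributes no extra arguments to the attach: that case exercises unidirectional authentication only. The auth.sh@100-103 loop heads seen in the trace cross every digest with every DH group and key index, roughly:

  for digest in "${digests[@]}"; do          # sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
          for keyid in "${!keys[@]}"; do     # 0 1 2 3 4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
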
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:54.785 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.046 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 nvme0n1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:55.308 
15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.308 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.572 nvme0n1 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.572 15:55:35 
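
The get_main_ns_ip helper runs before every attach. Reconstructed from its repeated trace (nvmf/common.sh@765-779), it maps the transport in use to the name of the environment variable holding the initiator-facing IP and resolves it with indirect expansion; the surrounding variable names here are inferred, not visible in the xtrace:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                  # 'tcp' in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # -> the *name* NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect: the value, 10.0.0.1 here
      echo "${!ip}"
  }
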
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.572 15:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.833 nvme0n1 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.833 15:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.833 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.097 nvme0n1 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:56.097 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:56.098 15:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.098 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.359 nvme0n1 00:36:56.359 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.359 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.359 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.359 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:56.360 15:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.931 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.192 nvme0n1 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:57.192 15:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:57.192 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:57.453 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:57.453 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.453 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.453 nvme0n1 00:36:57.453 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.715 15:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
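The block above is one complete iteration of the test's sweep: host/auth.sh@101 loops over DH groups, @102 over key ids, @103 programs the target side, and @104 reconnects with the matching host key. A minimal bash sketch of that structure, reconstructed from the host/auth.sh line tags replayed in this trace rather than from the SPDK source (the dhgroups/keys/ckeys arrays are defined earlier in the script and the group list is inferred from the groups that appear in the log):

    digest=sha256
    # dhgroups, keys and ckeys are arrays set up earlier in host/auth.sh
    # (not visible in this trace); the log replays ffdhe3072..ffdhe8192.
    for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
        for keyid in "${!keys[@]}"; do           # host/auth.sh@102, key ids 0-4
            # @103: push digest, dhgroup, key (and ctrlr key, when one
            # exists) to the kernel nvmet target; the trace shows the
            # values echoed: echo 'hmac(sha256)'; echo ffdhe4096; echo DHHC-1:...
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # @104: dial back in over TCP and verify authentication
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done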
00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.715 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.976 nvme0n1 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:36:57.976 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.977 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.238 nvme0n1 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:58.238 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.239 15:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.239 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.500 nvme0n1 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.500 15:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:58.761 15:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.678 15:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.678 nvme0n1 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 
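One subtlety worth decoding in the get_main_ns_ip replay above: the function first stores a variable name (ip=NVMF_INITIATOR_IP) and only afterwards prints 10.0.0.1, i.e. it resolves the name through bash indirect expansion (${!ip}). A sketch of the function as it replays at nvmf/common.sh@765-779; $TEST_TRANSPORT and the failure handling are assumptions, since the trace only shows the success path with the value tcp:

    get_main_ns_ip() {
        local ip                                     # nvmf/common.sh@765
        local -A ip_candidates                       # @766
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @768: variable names, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @769
        [[ -z $TEST_TRANSPORT ]] && return 1                   # @771 replays as: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # @771: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @772: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # @774: indirection, hence [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                # @779: echo 10.0.0.1
    }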
00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.678 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.939 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.200 nvme0n1 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.201 15:55:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.201 15:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.773 nvme0n1 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.773 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.343 nvme0n1 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.343 15:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.603 nvme0n1 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.603 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.865 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:03.436 nvme0n1 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.436 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.437 15:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.378 nvme0n1 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:04.378 
15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.378 15:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.956 nvme0n1 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.956 
15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.956 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 nvme0n1 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.525 15:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.525 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.525 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.525 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.525 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:05.796 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.797 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.367 nvme0n1 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:06.367 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.368 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.628 nvme0n1 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.628 15:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.628 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.629 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.889 nvme0n1 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:06.889 15:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.889 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.890 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 nvme0n1 00:37:07.151 15:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 nvme0n1 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.151 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:07.412 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.413 nvme0n1 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.413 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.675 15:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.675 nvme0n1 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.675 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.937 
15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:07.937 15:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.937 nvme0n1 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.937 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.199 nvme0n1 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.199 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.460 nvme0n1 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.460 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:08.722 
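The nvmf/common.sh@765-779 frames that keep recurring are a small address-resolution helper: they map the transport in use to the environment variable holding the right endpoint, then dereference it (10.0.0.1 here, the TCP initiator address). Reconstructed from the trace as a close paraphrase; the transport variable name is an assumption, since the trace only shows its expanded value (tcp):

# get_main_ns_ip, reconstructed from the nvmf/common.sh@765-779 trace lines.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@768
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@769

    [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP (@772)
    [[ -z ${!ip} ]] && return 1                  # indirect expansion (@774)
    echo "${!ip}"                                # -> 10.0.0.1 (@779)
}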
15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:08.722 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.723 15:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.723 nvme0n1 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.723 
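Key slot 4 is the one asymmetric case in the matrix: its controller key is empty (ckey=, so the [[ -z '' ]] guard at host/auth.sh@51 skips the second echo), and the attach at @61 carries only --dhchap-key key4. The target still authenticates the host, but the host does not request bidirectional authentication. The mechanism is the ${var:+...} alternate-value expansion built at @58; a self-contained demonstration (the DHHC-1 value below is a placeholder, not one of the test keys):

# How the optional --dhchap-ctrlr-key argument drops out for keyid=4: the
# ${var:+...} expansion yields nothing when the array slot is empty, so the
# ckey array contributes no words to the attach command.
declare -a ckeys=([1]="DHHC-1:02:placeholder" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=${keyid}: ${#ckey[@]} extra word(s): ${ckey[*]}"
done
# keyid=1: 2 extra word(s): --dhchap-ctrlr-key ckey1
# keyid=4: 0 extra word(s):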
15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.723 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:08.984 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.985 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.247 nvme0n1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:09.247 15:55:49 
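Each connect_authenticate pass (host/auth.sh@55-61) reduces to two RPCs once the address is known: restrict the allowed digests and DH groups, then attach with the key names. The same sequence can be driven by hand with scripts/rpc.py, which is what the rpc_cmd wrapper ultimately invokes; key1/ckey1 are keyring entries registered earlier in auth.sh, outside this excerpt:

# Hand-driven equivalent of the traced host-side sequence for sha384 +
# ffdhe4096, key slot 1 (illustrative; flags taken verbatim from the trace).
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1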
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.247 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.508 nvme0n1 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.508 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.509 15:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.770 nvme0n1 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.770 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.031 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.292 nvme0n1 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:10.292 15:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.292 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.554 nvme0n1 00:37:10.554 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.554 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.554 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.555 15:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.126 nvme0n1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.126 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.387 nvme0n1 00:37:11.387 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.387 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:11.387 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:11.387 15:55:51 
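All of the secrets echoed in this section use the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> records how the secret was transformed (00 = unhashed; 01, 02, 03 = SHA-256, SHA-384, SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick way to inspect one, shown here on the key-slot-1 secret from the trace (illustrative):

# Strip the DHHC-1:<t>: prefix and the trailing colon, decode, count bytes:
# a 32/48/64-byte secret decodes to 36/52/68 bytes including the CRC-32.
key='DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:'
b64=${key#DHHC-1:*:}   # drop the "DHHC-1:00:" prefix (shortest match)
b64=${b64%:}           # drop the trailing colon
echo -n "${b64}" | base64 -d | wc -c   # -> 52 (48-byte secret + 4 CRC bytes)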
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.387 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.648 15:55:51 
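The records above cover one cell of the authentication matrix that host/auth.sh walks: for each digest, DH group and key ID, the target side is programmed with nvmet_auth_set_key, the SPDK host is pinned to that one digest and DH group with bdev_nvme_set_options, the controller is attached with the per-key DH-HMAC-CHAP secrets, the trace checks that a controller named nvme0 appears, and the controller is detached again. A condensed sketch of that loop, assuming rpc_cmd wraps scripts/rpc.py against the running target and that keys[]/ckeys[] hold the generated DHHC-1 secrets; the flags are copied from the trace, while the helper bodies are paraphrased rather than quoted from auth.sh:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # program the kernel nvmet target with the secrets for this combination
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # pin the SPDK host to a single digest and a single DH group
                rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # attach with the host secret and, when one exists, the controller secret
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
                    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
                    ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
                # the attach only yields a controller if DH-HMAC-CHAP negotiation succeeded
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done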
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.648 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.649 15:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.910 nvme0n1
00:37:11.910 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.910 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.910 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.910 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.910 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.171 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.431 nvme0n1
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.431 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=:
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=:
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.692 15:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.953 nvme0n1
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
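Each nvmet_auth_set_key call is what produces the echo 'hmac(...)', echo ffdheNNNN and echo DHHC-1:... records above: the negotiation parameters and the key material are handed to the kernel nvmet target. The trace only shows the echoed values, so the destination has to be inferred; a hedged sketch follows, assuming the Linux nvmet configfs layout for per-host DH-HMAC-CHAP attributes (the path and attribute names are assumptions, not taken from this log):

    # assumed configfs entry for the host NQN used throughout this run
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$nvmet_host/dhchap_hash"        # digest, as echoed above
    echo ffdhe6144 > "$nvmet_host/dhchap_dhgroup"          # DH group, as echoed above
    echo "DHHC-1:00:<host secret>:" > "$nvmet_host/dhchap_key"   # key${keyid}; placeholder value
    # the controller secret is written only when ckey${keyid} is non-empty
    [[ -n $ckey ]] && echo "DHHC-1:02:<ctrl secret>:" > "$nvmet_host/dhchap_ctrl_key"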
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5:
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=:
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5:
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]]
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=:
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.953 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.213 15:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.784 nvme0n1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==:
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==:
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.784 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.355 nvme0n1
00:37:14.355 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.355 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:14.355 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:14.355 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o:
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K:
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o:
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K:
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
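The get_main_ns_ip block that repeats before every attach picks the dial address without hard-coding it: the transport name selects the name of an environment variable, and that name is then dereferenced. The ${!ip} indirection step is implied by ip=NVMF_INITIATOR_IP being followed directly by the literal 10.0.0.1 in the trace; a condensed sketch of the helper as it behaves here, with the transport variable name being an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT ]] && return 1     # the '[[ -z tcp ]]' check above
        ip=${ip_candidates[$TEST_TRANSPORT]}     # tcp -> NVMF_INITIATOR_IP
        [[ -z $ip ]] && return 1                 # '[[ -z NVMF_INITIATOR_IP ]]'
        [[ -z ${!ip} ]] && return 1              # '[[ -z 10.0.0.1 ]]', via indirection
        echo "${!ip}"                            # prints 10.0.0.1 in this run
    }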
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.616 15:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.187 nvme0n1
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.187 15:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.758 nvme0n1
00:37:15.758 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.758 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:15.758 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:15.758 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.758 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=:
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=:
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.019 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.590 nvme0n1
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
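Note what changes at key ID 4 in the passes above: ckey is assigned the empty string, the [[ -z '' ]] guard skips echoing a controller secret, and the matching bdev_nvme_attach_controller carries --dhchap-key key4 but no --dhchap-ctrlr-key, so that connection authenticates in one direction only. The mechanism is bash's ${var:+word} expansion inside the ckey array assignment seen in the trace:

    keyid=4
    ckeys[4]=""                       # no controller secret generated for key ID 4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"                # prints 0: the whole option pair vanishes
    keyid=1
    ckeys[1]="DHHC-1:02:<secret>:"    # placeholder for a real controller secret
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"                 # prints: --dhchap-ctrlr-key ckey1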
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:37:16.590 15:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:16.590 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.590 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:37:16.590 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5:
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=:
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5:
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]]
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=:
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.591 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.851 nvme0n1
00:37:16.851 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.851 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==:
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==:
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.852 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.112 nvme0n1
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o:
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K:
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o:
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K:
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.112 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.375 nvme0n1
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==:
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u:
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.375 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.637 nvme0n1
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.637 15:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.637 nvme0n1 00:37:17.637 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.637 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.637 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.637 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.637 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:17.951 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.952 nvme0n1 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.952 
15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.952 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:18.232 15:55:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 nvme0n1 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:18.232 15:55:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.509 nvme0n1 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.509 15:55:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.509 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.784 15:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.784 nvme0n1 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.784 
15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:18.784 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.785 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
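(Aside: every round in this pass has the same shape, whatever the digest, DH group, or key index under test. Condensed into a standalone sketch — rpc.py standing in for the suite's rpc_cmd wrapper, and key2/ckey2 assumed to be key names registered during earlier test setup — one host-side DHCHAP round looks like this; values mirror the log: sha512 digest, ffdhe3072 group, key index 2, initiator 10.0.0.1:4420.)

#!/usr/bin/env bash
# Sketch of one authentication round as exercised above, not the verbatim
# test script.
digest=sha512 dhgroup=ffdhe3072 keyid=2
# Restrict the host to the digest/DH-group combination under test.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Connect with DH-HMAC-CHAP, offering the host key and the controller key.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# A successful mutual authentication leaves one controller registered...
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# ...which is torn down before the next digest/dhgroup/key combination.
rpc.py bdev_nvme_detach_controller nvme0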
00:37:19.045 nvme0n1 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:19.045 15:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.045 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.306 nvme0n1 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.306 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.567 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.567 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.568 15:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.568 15:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.568 15:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.830 nvme0n1 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.830 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.092 nvme0n1 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.092 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.354 nvme0n1 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.354 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.615 15:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.877 nvme0n1 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.877 15:56:01 
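The recurring auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), uses bash ":+" alternate-value expansion to build an optional argument list: when no controller key exists for a keyid the array stays empty, so it can be splatted into the attach command unconditionally. A standalone demonstration with faked key material:

    ckeys=([0]="DHHC-1:..." [4]="")   # keyid 4 has no controller key
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[@]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>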
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:20.877 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.878 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.449 nvme0n1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
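Every successful round is verified and torn down the same way at auth.sh@64-65; the attach alone is not the pass condition. Equivalent to the trace (using plain rpc.py for the suite's rpc_cmd wrapper):

    # Authentication counts as passed only once the controller shows up under
    # its expected name; it is then detached to make room for the next round.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    rpc.py bdev_nvme_detach_controller nvme0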
key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:21.449 15:56:01 
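The DHHC-1 strings are NVMe DH-HMAC-CHAP secret representations of the form DHHC-1:<t>:<base64>:, where <t> names the hash, if any, used to transform the secret before use (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 of it. A quick length sanity check on the key1 value above:

    key='DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==:'
    b64=${key#DHHC-1:00:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c   # 52 = 48-byte secret + 4-byte CRC-32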
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.449 15:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.709 nvme0n1 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.709 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.970 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.231 nvme0n1 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.231 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.491 15:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.752 nvme0n1 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:22.752 15:56:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.752 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.323 nvme0n1 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
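All of this is driven by the nested loops visible at auth.sh@101-104: the outer loop walks the DH groups, the inner one walks the key indices, and each pair first programs the target, then connects from the host. In outline (the sha512 digest is fixed here, presumably by an enclosing digest loop outside this excerpt):

    for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do           # 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done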
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1YjRjZDFjZjgwNWRkYzQ0N2ZiYmRmYmYwNDQyNTJ7Ltb5: 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZjMjllOWUxMDJiMGNlMThiMDIyYWNhNTE5ZTgxYTY4ODMyZjU3NGFmYjgyNjQ3NDE0YmQyNDNhNTk2M2MzMv1avNw=: 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.323 15:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.894 nvme0n1 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.894 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.154 15:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.725 nvme0n1 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.725 15:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.725 15:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.725 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.667 nvme0n1 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2E1NjgwYWY1OGUwMjMyZjZlOTQxY2E5MGJlZjA2NWNmYWYzODA1NGY4Zjk1NDc5PO4ieg==: 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODQyN2MyZmVmOWMwNzE5ZDEzZjc0MDdkNjM5YTQwYWS9Mq0u: 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.667 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:25.668 15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.668 
15:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.239 nvme0n1 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDVlODVjZGVlYTI0MmRhZjNiZThjNDFiNzhkODI0ZTE5M2VlMjdiNjhkYmZkYzk4M2MyZjJiN2M4NGYzMzJhMHrx/qw=: 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.239 15:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.810 nvme0n1 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:26.810 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.811 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.074 request: 00:37:27.074 { 00:37:27.074 "name": "nvme0", 00:37:27.074 "trtype": "tcp", 00:37:27.074 "traddr": "10.0.0.1", 00:37:27.074 "adrfam": "ipv4", 00:37:27.074 "trsvcid": "4420", 00:37:27.074 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:27.074 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:27.074 "prchk_reftag": false, 00:37:27.074 "prchk_guard": false, 00:37:27.074 "hdgst": false, 00:37:27.074 "ddgst": false, 00:37:27.074 "allow_unrecognized_csi": false, 00:37:27.074 "method": "bdev_nvme_attach_controller", 00:37:27.074 "req_id": 1 00:37:27.074 } 00:37:27.074 Got JSON-RPC error response 00:37:27.074 response: 00:37:27.074 { 00:37:27.074 "code": -5, 00:37:27.074 "message": "Input/output error" 00:37:27.074 } 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
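The NOT wrapper seen above inverts a command's exit status so that an expected authentication failure counts as a test pass. Simplified shape (the real helper in autotest_common.sh also distinguishes exit codes above 128 and manages xtrace):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command failed
  }
  # As used above: the attach must fail while no DH-HMAC-CHAP key is offered,
  # which surfaces as JSON-RPC error -5 (Input/output error).
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0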
00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.074 request: 00:37:27.074 { 00:37:27.074 "name": "nvme0", 00:37:27.074 "trtype": "tcp", 00:37:27.074 "traddr": "10.0.0.1", 00:37:27.074 "adrfam": "ipv4", 00:37:27.074 "trsvcid": "4420", 00:37:27.074 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:27.074 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:27.074 "prchk_reftag": false, 00:37:27.074 "prchk_guard": false, 00:37:27.074 "hdgst": false, 00:37:27.074 "ddgst": false, 00:37:27.074 "dhchap_key": "key2", 00:37:27.074 "allow_unrecognized_csi": false, 00:37:27.074 "method": "bdev_nvme_attach_controller", 00:37:27.074 "req_id": 1 00:37:27.074 } 00:37:27.074 Got JSON-RPC error response 00:37:27.074 response: 00:37:27.074 { 00:37:27.074 "code": -5, 00:37:27.074 "message": "Input/output error" 00:37:27.074 } 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.074 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.336 request: 00:37:27.336 { 00:37:27.336 "name": "nvme0", 00:37:27.336 "trtype": "tcp", 00:37:27.336 "traddr": "10.0.0.1", 00:37:27.336 "adrfam": "ipv4", 00:37:27.336 "trsvcid": "4420", 00:37:27.336 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:27.336 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:27.336 "prchk_reftag": false, 00:37:27.336 "prchk_guard": false, 00:37:27.336 "hdgst": false, 00:37:27.336 "ddgst": false, 00:37:27.336 "dhchap_key": "key1", 00:37:27.336 "dhchap_ctrlr_key": "ckey2", 00:37:27.336 "allow_unrecognized_csi": false, 00:37:27.336 "method": "bdev_nvme_attach_controller", 00:37:27.336 "req_id": 1 00:37:27.336 } 00:37:27.336 Got JSON-RPC error response 00:37:27.336 response: 00:37:27.336 { 00:37:27.336 "code": -5, 00:37:27.336 "message": "Input/output 
error" 00:37:27.336 } 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.336 nvme0n1 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.336 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.597 request: 00:37:27.597 { 00:37:27.597 "name": "nvme0", 00:37:27.597 "dhchap_key": "key1", 00:37:27.597 "dhchap_ctrlr_key": "ckey2", 00:37:27.597 "method": "bdev_nvme_set_keys", 00:37:27.597 "req_id": 1 00:37:27.597 } 00:37:27.597 Got JSON-RPC error response 00:37:27.597 response: 00:37:27.597 { 00:37:27.597 "code": -13, 00:37:27.597 "message": "Permission denied" 00:37:27.597 } 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
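bdev_nvme_set_keys rotates DH-HMAC-CHAP secrets on a live controller. The pattern above pairs a positive and a negative case: the rotation matching the target's freshly provisioned key pair 2 succeeds, while a mismatched pair is refused with JSON-RPC error -13 (Permission denied). Condensed:

  # Target side was just re-keyed via nvmet_auth_set_key ... 2, so this matches:
  rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ...while an inconsistent pair must be rejected:
  NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2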
!es == 0 )) 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:27.597 15:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:28.537 15:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.537 15:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:28.538 15:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.538 15:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.538 15:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.538 15:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:28.538 15:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:29.923 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1M2NhNjIwMDY2ZmY3NGQ0NWY3OTBlOWE2MzJjNTQ2ZjUzZDFlNTNjMDcxZThh1FJPKA==: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
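The repeated 'jq length' plus 'sleep 1s' entries above are a poll loop: after the rejected re-key, the controller created with the one-second loss timeout is expected to disappear, and the test waits until bdev_nvme_get_controllers reports none. Equivalent shape (a sketch of the loop, not the verbatim auth.sh lines):

  # Poll until the failed controller has been torn down (list length reaches 0).
  while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
  done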
DHHC-1:02:ZTk0MjQ3NzJmZDVjYTkwYTk5NGFhNTFhODBkZTg2NDlkMTdiMzhjYWE5MzNjYWFlA5Uhew==: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.924 nvme0n1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE0MTAzZWRjMTY1MjdhNjg4ZDk4NDAxZGVkOTZkYTKG9Q6o: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDRmNTBjNDk2MDhjMTQ1OGI0MmQzNTJhMWMwMDM4NDc7e/5K: 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.924 request: 00:37:29.924 { 00:37:29.924 "name": "nvme0", 00:37:29.924 "dhchap_key": "key2", 00:37:29.924 "dhchap_ctrlr_key": "ckey1", 00:37:29.924 "method": "bdev_nvme_set_keys", 00:37:29.924 "req_id": 1 00:37:29.924 } 00:37:29.924 Got JSON-RPC error response 00:37:29.924 response: 00:37:29.924 { 00:37:29.924 "code": -13, 00:37:29.924 "message": "Permission denied" 00:37:29.924 } 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:29.924 15:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:31.312 15:56:11 
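The cleanup that follows (nvmftestfini plus clean_kernel_target) dismantles both ends of the setup. Condensed from the commands traced below, with the pid and NQNs from the log; note that configfs symlinks must be removed before the directories they point into:

  modprobe -v -r nvme-tcp nvme-fabrics                  # host-side transports
  kill 609073 && wait 609073                            # killprocess: SPDK target pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop test firewall rules
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm "$sub/allowed_hosts/nqn.2024-02.io.spdk:host0"     # symlinks first
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
  modprobe -r nvmet_tcp nvmet                           # target modules last

The bare 'echo 0' in the trace presumably disables the namespace before removal; its destination is not shown, so it is omitted from the sketch.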
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.312 rmmod nvme_tcp 00:37:31.312 rmmod nvme_fabrics 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 609073 ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 609073 ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 609073' 00:37:31.312 killing process with pid 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 609073 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:37:31.312 15:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:33.224 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:37:33.485 15:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:37.684 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:37.684 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:37.684 15:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.W9f /tmp/spdk.key-null.NXU /tmp/spdk.key-sha256.jQ3 /tmp/spdk.key-sha384.HNN /tmp/spdk.key-sha512.pC9 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:37.684 15:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:40.982 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:37:40.982 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:40.982 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:40.983 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:40.983 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:41.554 00:37:41.554 real 1m3.421s 00:37:41.554 user 0m57.096s 00:37:41.554 sys 0m16.211s 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.554 ************************************ 00:37:41.554 END TEST nvmf_auth_host 00:37:41.554 ************************************ 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.554 ************************************ 00:37:41.554 START TEST nvmf_digest 00:37:41.554 ************************************ 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:41.554 * Looking for test storage... 
00:37:41.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:37:41.554 15:56:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:41.554 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:41.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.555 --rc genhtml_branch_coverage=1 00:37:41.555 --rc genhtml_function_coverage=1 00:37:41.555 --rc genhtml_legend=1 00:37:41.555 --rc geninfo_all_blocks=1 00:37:41.555 --rc geninfo_unexecuted_blocks=1 00:37:41.555 00:37:41.555 ' 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:41.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.555 --rc genhtml_branch_coverage=1 00:37:41.555 --rc genhtml_function_coverage=1 00:37:41.555 --rc genhtml_legend=1 00:37:41.555 --rc geninfo_all_blocks=1 00:37:41.555 --rc geninfo_unexecuted_blocks=1 00:37:41.555 00:37:41.555 ' 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:41.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.555 --rc genhtml_branch_coverage=1 00:37:41.555 --rc genhtml_function_coverage=1 00:37:41.555 --rc genhtml_legend=1 00:37:41.555 --rc geninfo_all_blocks=1 00:37:41.555 --rc geninfo_unexecuted_blocks=1 00:37:41.555 00:37:41.555 ' 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:41.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.555 --rc genhtml_branch_coverage=1 00:37:41.555 --rc genhtml_function_coverage=1 00:37:41.555 --rc genhtml_legend=1 00:37:41.555 --rc geninfo_all_blocks=1 00:37:41.555 --rc geninfo_unexecuted_blocks=1 00:37:41.555 00:37:41.555 ' 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.555 
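The lt/cmp_versions dance above decides whether the installed lcov predates 2.0 and therefore needs the legacy branch/function coverage flags. A compact equivalent of the idiom (version_lt is an illustrative name, not the verbatim scripts/common.sh function, and non-numeric fields such as release-candidate tags are not handled):

  version_lt() {
    local IFS=.-:          # split on the same separators as cmp_versions
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1               # equal is not less-than
  }
  # As traced: lcov 1.15 < 2, so the pre-2.0 --rc lcov_*_coverage flags apply.
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov flags"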
15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.555 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
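nvmf/common.sh above derives a host identity once and reuses it for every later connect. A sketch; the NVME_HOST array is verbatim from the trace, while the NVME_HOSTID derivation shown here is one plausible way to obtain the UUID suffix:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as logged
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later invocations then reuse the pair: nvme connect "${NVME_HOST[@]}" -t tcp ...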
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.816 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:41.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:41.817 15:56:22 
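The '[: : integer expression expected' line above is a genuine shell diagnostic captured in the log: line 33 of nvmf/common.sh hands an empty string to the -eq test. A defensive form (illustrative only; FLAG stands in for whichever environment variable was unset, which the trace does not name):

  if [ "${FLAG:-0}" -eq 1 ]; then   # the ":-0" default guarantees an integer operand
    :                               # branch body elided
  fi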
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:41.817 15:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:49.959 
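Two notes on the trace above. The "[: : integer expression expected" message is the harness evaluating [ '' -eq 1 ] at nvmf/common.sh line 33: an empty flag compared numerically makes bash print the error, the test simply evaluates false, and the run continues. Separately, the e810/x722/mlx arrays are keyed by PCI vendor:device pairs (Intel 0x8086, Mellanox 0x15b3). A minimal sketch of that classification, assuming a plain sysfs scan in place of the harness's prebuilt pci_bus_cache (which is not shown in this log):

# Sketch only: bucket NICs by PCI ID the way the e810/x722/mlx arrays do.
# The real harness reads a prebuilt pci_bus_cache; this stand-in rescans /sys.
intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx
for dev in /sys/bus/pci/devices/*; do
    id="$(<"$dev/vendor"):$(<"$dev/device")"
    case "$id" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;  # Intel E810 (ice)
        "$intel:0x37d2")                 x722+=("${dev##*/}") ;;  # Intel X722
        "$mellanox:"*)                   mlx+=("${dev##*/}")  ;;  # broader than the trace's explicit ConnectX ID list
    esac
done
echo "e810=${e810[*]:-} x722=${x722[*]:-} mlx=${mlx[*]:-}"

On this rig the two 0x159b functions at 0000:31:00.0/.1 land in e810, which is why pci_devs is then reduced to that array below.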
15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:49.959 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:49.959 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:49.959 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:49.960 Found net devices under 0000:31:00.0: cvl_0_0 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.960 
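The loop traced above resolves each matched PCI function to its kernel net device through sysfs. Condensed, with the same variable names:

# /sys/bus/pci/devices/<bdf>/net/ holds one entry per netdev bound to that
# function (here: cvl_0_0 under 0000:31:00.0, cvl_0_1 under 0000:31:00.1).
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue        # no driver or netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the path, keep names
    net_devs+=("${pci_net_devs[@]}")
done

With both E810 ports bound to ice this yields cvl_0_0 and cvl_0_1, which nvmf_tcp_init below splits into a target side and an initiator side.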
15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:49.960 Found net devices under 0000:31:00.1: cvl_0_1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:49.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:49.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:37:49.960 00:37:49.960 --- 10.0.0.2 ping statistics --- 00:37:49.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.960 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:49.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:49.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:37:49.960 00:37:49.960 --- 10.0.0.1 ping statistics --- 00:37:49.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.960 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:49.960 ************************************ 00:37:49.960 START TEST nvmf_digest_clean 00:37:49.960 ************************************ 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=626513 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 626513 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 626513 ']' 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:49.960 15:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:49.960 [2024-09-27 15:56:29.646663] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:49.960 [2024-09-27 15:56:29.646727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.960 [2024-09-27 15:56:29.720558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.960 [2024-09-27 15:56:29.766560] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.960 [2024-09-27 15:56:29.766613] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.960 [2024-09-27 15:56:29.766622] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.960 [2024-09-27 15:56:29.766629] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.960 [2024-09-27 15:56:29.766636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:49.960 [2024-09-27 15:56:29.766659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:50.221 null0 00:37:50.221 [2024-09-27 15:56:30.591824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.221 [2024-09-27 15:56:30.616178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=626753 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 626753 /var/tmp/bperf.sock 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 626753 ']' 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.221 15:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:50.221 [2024-09-27 15:56:30.676732] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:50.221 [2024-09-27 15:56:30.676796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626753 ] 00:37:50.482 [2024-09-27 15:56:30.759007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.482 [2024-09-27 15:56:30.805843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.052 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.052 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:51.052 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:51.052 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:51.052 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:51.314 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:51.314 15:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:51.575 nvme0n1 00:37:51.575 15:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:51.575 15:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:51.835 Running I/O for 2 seconds... 
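The nvmf_tcp_init block traced a little earlier is easiest to read as one unit: the target port is moved into a private network namespace so initiator and target traffic genuinely cross the wire instead of short-circuiting through loopback. Essentially (error handling and the harness's initial address flushing omitted):

ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP; the harness also tags this rule with -m comment for cleanup
ping -c 1 10.0.0.2                                # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

The two successful pings are the gating health check; from here on every nvmf_tgt invocation is wrapped in "ip netns exec cvl_0_0_ns_spdk", as the NVMF_TARGET_NS_CMD prefix on the nvmfpid=626513 launch shows.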
00:37:53.715 18634.00 IOPS, 72.79 MiB/s 19658.50 IOPS, 76.79 MiB/s 00:37:53.715 Latency(us) 00:37:53.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.715 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:53.715 nvme0n1 : 2.00 19675.73 76.86 0.00 0.00 6499.03 3099.31 18677.76 00:37:53.715 =================================================================================================================== 00:37:53.715 Total : 19675.73 76.86 0.00 0.00 6499.03 3099.31 18677.76 00:37:53.715 { 00:37:53.715 "results": [ 00:37:53.715 { 00:37:53.715 "job": "nvme0n1", 00:37:53.715 "core_mask": "0x2", 00:37:53.715 "workload": "randread", 00:37:53.715 "status": "finished", 00:37:53.715 "queue_depth": 128, 00:37:53.715 "io_size": 4096, 00:37:53.715 "runtime": 2.004754, 00:37:53.715 "iops": 19675.73078791712, 00:37:53.715 "mibps": 76.85832339030125, 00:37:53.715 "io_failed": 0, 00:37:53.715 "io_timeout": 0, 00:37:53.715 "avg_latency_us": 6499.0344334305155, 00:37:53.715 "min_latency_us": 3099.306666666667, 00:37:53.715 "max_latency_us": 18677.76 00:37:53.715 } 00:37:53.715 ], 00:37:53.715 "core_count": 1 00:37:53.715 } 00:37:53.715 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:53.715 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:53.715 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:53.715 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:53.715 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:53.715 | select(.opcode=="crc32c") 00:37:53.715 | "\(.module_name) \(.executed)"' 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 626753 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 626753 ']' 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 626753 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 626753 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 626753' 00:37:53.975 killing process with pid 626753 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 626753 00:37:53.975 Received shutdown signal, test time was about 2.000000 seconds 00:37:53.975 00:37:53.975 Latency(us) 00:37:53.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.975 =================================================================================================================== 00:37:53.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:53.975 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 626753 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=627432 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 627432 /var/tmp/bperf.sock 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 627432 ']' 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:54.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:54.238 15:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:54.238 [2024-09-27 15:56:34.552434] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:54.238 [2024-09-27 15:56:34.552494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627432 ] 00:37:54.238 I/O size of 131072 is greater than zero copy threshold (65536). 
00:37:54.238 Zero copy mechanism will not be used. 00:37:54.238 [2024-09-27 15:56:34.631058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.238 [2024-09-27 15:56:34.659462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:55.180 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:55.442 nvme0n1 00:37:55.701 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:55.701 15:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:55.701 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:55.701 Zero copy mechanism will not be used. 00:37:55.701 Running I/O for 2 seconds... 
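Each run_bperf pass drives bdevperf entirely over its own RPC socket: the binary starts with --wait-for-rpc, initialization is then released, a controller is attached with data digest enabled, and only then is the timed workload kicked off. Condensed from the trace (paths as on this rig):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
"$RPC" -s "$SOCK" framework_start_init             # release init held by --wait-for-rpc
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0         # --ddgst turns on NVMe/TCP data digest
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$SOCK" perform_tests                       # run the configured 2-second workload

As a cross-check on the result tables, MiB/s is IOPS * io_size / 2^20: the first randread pass reported 19675.73 IOPS at 4096-byte I/O, i.e. 19675.73 * 4096 / 1048576 ≈ 76.86 MiB/s, matching its mibps field.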
00:37:57.581 3313.00 IOPS, 414.12 MiB/s 3298.00 IOPS, 412.25 MiB/s 00:37:57.581 Latency(us) 00:37:57.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.581 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:57.581 nvme0n1 : 2.00 3303.65 412.96 0.00 0.00 4840.52 641.71 12124.16 00:37:57.581 =================================================================================================================== 00:37:57.581 Total : 3303.65 412.96 0.00 0.00 4840.52 641.71 12124.16 00:37:57.581 { 00:37:57.581 "results": [ 00:37:57.581 { 00:37:57.581 "job": "nvme0n1", 00:37:57.581 "core_mask": "0x2", 00:37:57.581 "workload": "randread", 00:37:57.581 "status": "finished", 00:37:57.581 "queue_depth": 16, 00:37:57.581 "io_size": 131072, 00:37:57.581 "runtime": 2.001421, 00:37:57.581 "iops": 3303.652754717773, 00:37:57.581 "mibps": 412.9565943397216, 00:37:57.581 "io_failed": 0, 00:37:57.581 "io_timeout": 0, 00:37:57.581 "avg_latency_us": 4840.52197620488, 00:37:57.581 "min_latency_us": 641.7066666666667, 00:37:57.581 "max_latency_us": 12124.16 00:37:57.581 } 00:37:57.581 ], 00:37:57.581 "core_count": 1 00:37:57.581 } 00:37:57.581 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:57.581 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:57.581 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:57.581 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:57.581 | select(.opcode=="crc32c") 00:37:57.581 | "\(.module_name) \(.executed)"' 00:37:57.581 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 627432 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 627432 ']' 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 627432 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627432 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 627432' 00:37:57.841 killing process with pid 627432 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 627432 00:37:57.841 Received shutdown signal, test time was about 2.000000 seconds 00:37:57.841 00:37:57.841 Latency(us) 00:37:57.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.841 =================================================================================================================== 00:37:57.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:57.841 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 627432 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=628118 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 628118 /var/tmp/bperf.sock 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 628118 ']' 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:58.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:58.103 15:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:58.103 [2024-09-27 15:56:38.480479] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:37:58.103 [2024-09-27 15:56:38.480535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628118 ] 00:37:58.103 [2024-09-27 15:56:38.558482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.103 [2024-09-27 15:56:38.584771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:59.044 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:59.305 nvme0n1 00:37:59.305 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:59.305 15:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:59.565 Running I/O for 2 seconds... 
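After each pass, digest.sh confirms that crc32c work actually executed, and in which accel module, by filtering accel_get_stats with jq. With scan_dsa=false the expected module is software (a DSA-enabled run would expect dsa instead). Roughly, reusing RPC and SOCK from the sketch above:

read -r acc_module acc_executed < <(
    "$RPC" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))                 # some digest work really happened
[[ $acc_module == software ]]          # and it ran in the expected module

This is why every pass ends with the accel_get_stats call and the software-module comparison before the bperf process is killed.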
00:38:01.447 29364.00 IOPS, 114.70 MiB/s 29546.00 IOPS, 115.41 MiB/s 00:38:01.447 Latency(us) 00:38:01.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.447 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.447 nvme0n1 : 2.01 29549.60 115.43 0.00 0.00 4324.64 3112.96 14527.15 00:38:01.447 =================================================================================================================== 00:38:01.447 Total : 29549.60 115.43 0.00 0.00 4324.64 3112.96 14527.15 00:38:01.447 { 00:38:01.447 "results": [ 00:38:01.447 { 00:38:01.447 "job": "nvme0n1", 00:38:01.447 "core_mask": "0x2", 00:38:01.447 "workload": "randwrite", 00:38:01.447 "status": "finished", 00:38:01.447 "queue_depth": 128, 00:38:01.447 "io_size": 4096, 00:38:01.447 "runtime": 2.005442, 00:38:01.447 "iops": 29549.59555050707, 00:38:01.447 "mibps": 115.42810761916824, 00:38:01.447 "io_failed": 0, 00:38:01.447 "io_timeout": 0, 00:38:01.447 "avg_latency_us": 4324.637116436045, 00:38:01.447 "min_latency_us": 3112.96, 00:38:01.447 "max_latency_us": 14527.146666666667 00:38:01.447 } 00:38:01.447 ], 00:38:01.447 "core_count": 1 00:38:01.447 } 00:38:01.447 15:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:01.447 15:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:01.447 15:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:01.447 15:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:01.447 | select(.opcode=="crc32c") 00:38:01.447 | "\(.module_name) \(.executed)"' 00:38:01.447 15:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 628118 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 628118 ']' 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 628118 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 628118 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 628118' 00:38:01.708 killing process with pid 628118 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 628118 00:38:01.708 Received shutdown signal, test time was about 2.000000 seconds 00:38:01.708 00:38:01.708 Latency(us) 00:38:01.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.708 =================================================================================================================== 00:38:01.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:01.708 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 628118 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=628885 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 628885 /var/tmp/bperf.sock 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 628885 ']' 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:01.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:01.968 15:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:01.968 [2024-09-27 15:56:42.287445] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:01.968 [2024-09-27 15:56:42.287526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628885 ] 00:38:01.968 I/O size of 131072 is greater than zero copy threshold (65536). 
00:38:01.968 Zero copy mechanism will not be used. 00:38:01.968 [2024-09-27 15:56:42.367340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.968 [2024-09-27 15:56:42.395709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.910 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:02.910 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:02.910 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:02.910 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:02.910 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:02.911 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:02.911 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:03.171 nvme0n1 00:38:03.171 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:03.171 15:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:03.431 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:03.431 Zero copy mechanism will not be used. 00:38:03.432 Running I/O for 2 seconds... 
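The recurring kill/teardown stanzas all go through the killprocess helper in autotest_common.sh. The trace exposes its shape, though the real helper carries more branches than this sketch reproduces:

# Sketch of killprocess as visible in the trace; details in
# autotest_common.sh may differ.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" || return 1                     # still running?
    if [[ $(uname) == Linux ]]; then
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1            # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                     # terminate, then reap
}

In this section it runs once per bperf pid (626753, 627432, 628118, and 628885 below) and finally for the nvmf target itself, pid 626513.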
00:38:05.313 6503.00 IOPS, 812.88 MiB/s 6827.00 IOPS, 853.38 MiB/s 00:38:05.313 Latency(us) 00:38:05.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:05.313 nvme0n1 : 2.00 6825.04 853.13 0.00 0.00 2340.47 1140.05 12670.29 00:38:05.314 =================================================================================================================== 00:38:05.314 Total : 6825.04 853.13 0.00 0.00 2340.47 1140.05 12670.29 00:38:05.314 { 00:38:05.314 "results": [ 00:38:05.314 { 00:38:05.314 "job": "nvme0n1", 00:38:05.314 "core_mask": "0x2", 00:38:05.314 "workload": "randwrite", 00:38:05.314 "status": "finished", 00:38:05.314 "queue_depth": 16, 00:38:05.314 "io_size": 131072, 00:38:05.314 "runtime": 2.003357, 00:38:05.314 "iops": 6825.04416337178, 00:38:05.314 "mibps": 853.1305204214725, 00:38:05.314 "io_failed": 0, 00:38:05.314 "io_timeout": 0, 00:38:05.314 "avg_latency_us": 2340.4652244082013, 00:38:05.314 "min_latency_us": 1140.0533333333333, 00:38:05.314 "max_latency_us": 12670.293333333333 00:38:05.314 } 00:38:05.314 ], 00:38:05.314 "core_count": 1 00:38:05.314 } 00:38:05.314 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:05.314 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:05.314 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:05.314 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:05.314 | select(.opcode=="crc32c") 00:38:05.314 | "\(.module_name) \(.executed)"' 00:38:05.314 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 628885 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 628885 ']' 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 628885 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:05.575 15:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 628885 00:38:05.575 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:05.575 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:05.575 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 628885' 00:38:05.575 killing process with pid 628885 00:38:05.575 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 628885 00:38:05.575 Received shutdown signal, test time was about 2.000000 seconds 00:38:05.575 00:38:05.575 Latency(us) 00:38:05.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.575 =================================================================================================================== 00:38:05.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:05.575 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 628885 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 626513 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 626513 ']' 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 626513 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 626513 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 626513' 00:38:05.836 killing process with pid 626513 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 626513 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 626513 00:38:05.836 00:38:05.836 real 0m16.708s 00:38:05.836 user 0m32.926s 00:38:05.836 sys 0m3.842s 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:05.836 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:05.836 ************************************ 00:38:05.836 END TEST nvmf_digest_clean 00:38:05.836 ************************************ 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:06.097 ************************************ 00:38:06.097 START TEST nvmf_digest_error 00:38:06.097 ************************************ 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:06.097 15:56:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=629826 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 629826 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 629826 ']' 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.097 15:56:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:06.097 [2024-09-27 15:56:46.440210] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:06.097 [2024-09-27 15:56:46.440296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.097 [2024-09-27 15:56:46.527843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.097 [2024-09-27 15:56:46.560450] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.097 [2024-09-27 15:56:46.560492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.097 [2024-09-27 15:56:46.560498] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.097 [2024-09-27 15:56:46.560503] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.097 [2024-09-27 15:56:46.560507] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
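The target above is launched with --wait-for-rpc, so it parks before framework initialization and only listens on /var/tmp/spdk.sock; that window is what lets the test re-route crc32c before any accel module has been selected. A minimal sketch of the same bring-up, assuming the stock rpc.py client and that framework_start_init (not shown in this log) is what common_target_config ultimately issues:

# start the target inside the test netns and hold it before framework init
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
# while init is pending, route crc32c operations to the error-injection module
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
# then let subsystem initialization proceed (assumed step, standard SPDK RPC)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init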
00:38:06.097 [2024-09-27 15:56:46.560523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:07.039 [2024-09-27 15:56:47.266468] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:07.039 null0 00:38:07.039 [2024-09-27 15:56:47.338798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.039 [2024-09-27 15:56:47.362995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=629861 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 629861 /var/tmp/bperf.sock 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 629861 ']' 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
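bdevperf is started here in -z (wait-for-tests) mode on its own RPC socket, so no I/O runs until the bdev is attached over /var/tmp/bperf.sock and bdevperf.py triggers the workload; the following lines do exactly that. Condensed, the sequence this log is executing is:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
# retry forever and keep per-error counters, then attach with data digest enabled
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# kick off the timed randread run
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests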
00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:07.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:07.039 15:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:07.039 [2024-09-27 15:56:47.425426] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:07.039 [2024-09-27 15:56:47.425475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629861 ] 00:38:07.039 [2024-09-27 15:56:47.503106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.299 [2024-09-27 15:56:47.531590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.869 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:08.130 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.130 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:08.130 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:08.389 nvme0n1 00:38:08.390 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:08.390 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.390 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:08.390 
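Because the controller is attached with --ddgst, the host verifies a crc32c data digest on every read completion; accel_error_inject_error -o crc32c -t corrupt -i 256 corrupts the next 256 of those computations, so each affected read fails digest verification and completes back as the TRANSIENT TRANSPORT ERROR (00/22) records that fill the remainder of this run. The per-module counts can be read back with the same accel_get_stats / jq filter the clean test used above; a minimal sketch against the bperf socket:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'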
15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.390 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:08.390 15:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:08.390 Running I/O for 2 seconds... 00:38:08.390 [2024-09-27 15:56:48.846618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.390 [2024-09-27 15:56:48.846650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.390 [2024-09-27 15:56:48.846662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.390 [2024-09-27 15:56:48.857220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.390 [2024-09-27 15:56:48.857244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.390 [2024-09-27 15:56:48.857255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.390 [2024-09-27 15:56:48.866521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.390 [2024-09-27 15:56:48.866541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.390 [2024-09-27 15:56:48.866551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.390 [2024-09-27 15:56:48.875945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.390 [2024-09-27 15:56:48.875965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.390 [2024-09-27 15:56:48.875974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.650 [2024-09-27 15:56:48.885195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.650 [2024-09-27 15:56:48.885215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.650 [2024-09-27 15:56:48.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.650 [2024-09-27 15:56:48.893107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.650 [2024-09-27 15:56:48.893126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.650 [2024-09-27 15:56:48.893136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:08.650 [2024-09-27 15:56:48.902084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.650 [2024-09-27 15:56:48.902103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.650 [2024-09-27 15:56:48.902112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.650 [2024-09-27 15:56:48.910476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.650 [2024-09-27 15:56:48.910494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.910503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.920666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.920684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.920694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.929236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.929255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.929264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.938869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.938888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.938902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.947251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.947270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.947279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.955863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.955881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.955890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.964142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.964161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.964175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.972311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.972329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.972339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.983453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.983470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.983480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:48.995667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:48.995685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:48.995694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.005983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.006002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.006012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.015163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.015181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.015191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.023642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.023660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.023670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.032675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.032693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.032702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.041909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.041928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.041937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.050048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.050067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.050076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.060409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.060427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.060436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.070023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.070041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.070050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.078593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.078611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.078620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.087219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.087238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:08.651 [2024-09-27 15:56:49.087247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.095555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.095574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.095583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.104782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.104800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.104810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.113695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.113713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.113722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.123100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.123118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.123134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.651 [2024-09-27 15:56:49.132511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.651 [2024-09-27 15:56:49.132529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.651 [2024-09-27 15:56:49.132538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.140366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.140384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.140393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.148766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:17194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.148793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.157910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.157929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.157938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.165949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.165968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.165978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.175586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.175604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.175613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.186346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.186364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.186374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.195796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.913 [2024-09-27 15:56:49.195814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.913 [2024-09-27 15:56:49.195822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.913 [2024-09-27 15:56:49.204365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.204397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.213537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.213555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.213564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.221752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.221780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.232844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.232862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.232872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.243112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.243131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.243140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.251737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.251755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.251764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.260952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.260970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.260979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.270332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.270350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.270359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.278498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 
00:38:08.914 [2024-09-27 15:56:49.278517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.278526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.288043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.288062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.288071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.295186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.295203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.295213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.306799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.306817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.306826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.317730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.317748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.317757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.327473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.327491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.327500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.336266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.336284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.336293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.344953] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.344971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.344980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.353795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.353813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.353821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.363633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.363651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.363664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.371542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.371560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.371569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.382335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.382353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.382362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:08.914 [2024-09-27 15:56:49.392605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:08.914 [2024-09-27 15:56:49.392624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.914 [2024-09-27 15:56:49.392633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.401039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.401057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.401067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.409146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.409165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.418596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.418615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.418624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.427131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.427150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.427159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.436033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.436051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.436061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.445295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.445313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.445322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.454169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.454188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.454197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.462656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.462674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.462684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.471899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.471917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.471926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.480215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.480233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.480242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.489121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.489139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.489148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.499111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.499130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.499140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.506874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.506898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.506908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.515744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.515763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.515778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.524861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.524879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.524889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.534121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.534139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.534148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.541861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.541880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.541889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.551777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.551797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.551806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.560788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.560807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.560816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.570937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.570955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.570965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.579200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.579218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:09.177 [2024-09-27 15:56:49.579227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:09.177 [2024-09-27 15:56:49.589361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570) 00:38:09.177 [2024-09-27 15:56:49.589380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:09.177 [2024-09-27 15:56:49.589389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:09.177 [2024-09-27 15:56:49.598804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:09.177 [2024-09-27 15:56:49.598826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:09.177 [2024-09-27 15:56:49.598836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:09.177 [2024-09-27 15:56:49.607290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:09.177 [2024-09-27 15:56:49.607308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:09.178 [2024-09-27 15:56:49.607318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (a data digest error on tqpair=(0x19e0570), the affected READ command (sqid:1, varying cid and lba, len:1), and its retryable COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for well over a hundred further commands between 15:56:49.616 and 15:56:50.832, interleaved with a periodic throughput sample (27528.00 IOPS, 107.53 MiB/s) ...]
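Each of those failures is the intended outcome of the digest-error phase of this test: the data digest (DDGST) trailing an NVMe/TCP data PDU fails CRC-32C verification in the initiator's receive path (nvme_tcp_accel_seq_recv_compute_crc32_done), so the command is completed with the generic, retryable status COMMAND TRANSIENT TRANSPORT ERROR (status code type 00h, status code 22h; note dnr:0, so the host may retry). As a rough illustration only, not SPDK's implementation (which, as the function name suggests, runs the checksum through its accel framework), here is a minimal Python sketch of the reflected CRC-32C used for NVMe/TCP header and data digests, with a hypothetical ddgst_ok helper:

def crc32c(data: bytes, crc: int = 0) -> int:
    # Bitwise CRC-32C (Castagnoli): init/xorout 0xFFFFFFFF, reflected
    # polynomial 0x82F63B78. Real stacks use table-driven or hardware
    # (e.g. SSE4.2 crc32) implementations instead of this slow loop.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

def ddgst_ok(payload: bytes, received_ddgst: int) -> bool:
    # A receiver recomputes the digest over the payload it actually got and
    # compares it with the DDGST field carried in the PDU; any mismatch is
    # reported as a data digest error, as in the log lines above.
    return crc32c(payload) == received_ddgst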
00:38:10.486 [2024-09-27 15:56:50.796713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:10.486 [2024-09-27 15:56:50.796731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.486 [2024-09-27 15:56:50.796741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:10.486 [2024-09-27 15:56:50.805075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:10.486 [2024-09-27 15:56:50.805093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.486 [2024-09-27 15:56:50.805102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:10.486 [2024-09-27 15:56:50.814082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:10.486 [2024-09-27 15:56:50.814099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.486 [2024-09-27 15:56:50.814108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:10.486 [2024-09-27 15:56:50.822610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:10.486 [2024-09-27 15:56:50.822628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.486 [2024-09-27 15:56:50.822638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:10.486 [2024-09-27 15:56:50.832128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19e0570)
00:38:10.486 [2024-09-27 15:56:50.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:10.486 [2024-09-27 15:56:50.832155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:10.486 27679.50 IOPS, 108.12 MiB/s
00:38:10.486 Latency(us)
00:38:10.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:10.486 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:38:10.486 nvme0n1 : 2.00 27707.16 108.23 0.00 0.00 4614.95 2129.92 15728.64
00:38:10.486 ===================================================================================================================
00:38:10.486 Total : 27707.16 108.23 0.00 0.00 4614.95 2129.92 15728.64
00:38:10.486 {
00:38:10.486   "results": [
00:38:10.486     {
00:38:10.486       "job": "nvme0n1",
00:38:10.486       "core_mask": "0x2",
00:38:10.486       "workload": "randread",
00:38:10.486       "status": "finished",
00:38:10.486       "queue_depth": 128,
00:38:10.486       "io_size": 4096,
00:38:10.486       "runtime": 2.003489,
00:38:10.486       "iops": 27707.164850917576,
00:38:10.486       "mibps": 108.23111269889678,
00:38:10.486       "io_failed": 0,
00:38:10.486       "io_timeout": 0,
00:38:10.486       "avg_latency_us": 4614.949768274155,
00:38:10.486       "min_latency_us": 2129.92,
00:38:10.486       "max_latency_us": 15728.64
00:38:10.486     }
00:38:10.486   ],
00:38:10.486   "core_count": 1
00:38:10.486 }
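The JSON block above is bdevperf's result object for the first run. Its mibps field follows directly from iops and io_size, so a quick sanity check is possible; a one-liner sketch with the values copied from the JSON:

  # cross-check: 27707.164850917576 IOPS * 4096 B per I/O / 1048576 B per MiB
  awk 'BEGIN { printf "%.2f MiB/s\n", 27707.164850917576 * 4096 / 1048576 }'
  # prints 108.23, matching the reported mibps of 108.23111269889678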
00:38:10.486 15:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:10.486 15:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:10.486 15:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:10.486 15:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:10.486 | .driver_specific
00:38:10.486 | .nvme_error
00:38:10.486 | .status_code
00:38:10.486 | .command_transient_transport_error'
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 629861
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 629861 ']'
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 629861
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 629861
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 629861'
killing process with pid 629861
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 629861
Received shutdown signal, test time was about 2.000000 seconds
00:38:10.746
00:38:10.746 Latency(us)
00:38:10.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:10.746 ===================================================================================================================
00:38:10.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:10.746 15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 629861
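The (( 217 > 0 )) evaluation above is the pass criterion for this stage: with error statistics enabled via bdev_nvme_set_options --nvme-error-stat, each injected digest failure is counted per completion status, and get_transient_errcount reads that counter back through bdev_get_iostat and the jq filter traced above. A standalone sketch of the same readback, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock:

  # read the transient-transport-error counter back by hand (sketch)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  echo "transient transport errors: $errs"   # 217 in the run above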
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=630619
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 630619 /var/tmp/bperf.sock
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 630619 ']'
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
15:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:11.008 [2024-09-27 15:56:51.270409] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:38:11.008 [2024-09-27 15:56:51.270483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630619 ]
00:38:11.008 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:11.008 Zero copy mechanism will not be used.
00:38:11.008 [2024-09-27 15:56:51.347413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:11.008 [2024-09-27 15:56:51.375860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:11.579 15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:11.840 15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:11.840 15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
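At this point the trace has started a fresh bdevperf (-z holds it idle until RPCs arrive), enabled per-status NVMe error counting, cleared any previous injection, and attached the target with --ddgst so TCP data digests (CRC32C) are verified on receive; the corrupt injection is issued just below. A condensed sketch of the configuration sequence, using the same rpc.py invocations as the trace:

  # configure the freshly started bdevperf for the digest-error run (sketch)
  bperf='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # the harness issues the injection through its rpc_cmd helper; that helper's
  # RPC socket is not shown in the trace, so the calls are quoted, not scripted:
  #   rpc_cmd accel_error_inject_error -o crc32c -t disable       (before attach)
  #   rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 (just below)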
00:38:12.100 nvme0n1
00:38:12.100 15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
15:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:12.361 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:12.361 Zero copy mechanism will not be used.
00:38:12.361 Running I/O for 2 seconds...
00:38:12.361 [2024-09-27 15:56:52.676349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.676381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.676394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.685496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.685521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.685531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.696096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.696116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.696127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.706439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.706459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.706468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.717599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.717618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.717628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.727357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.727376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.727385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.739744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.739764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.739773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.750132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.750151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.750160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.760347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.760365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.760375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.768238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.768258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.768267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.778694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.778715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.778729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.789880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.789915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.801221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.801242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.801251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.812724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.812752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.824099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.824119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.824128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.835882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.835909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.835918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.361 [2024-09-27 15:56:52.846561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.361 [2024-09-27 15:56:52.846581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.361 [2024-09-27 15:56:52.846590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.857817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.857838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.857847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.869054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.869073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.869082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.879206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.879230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.879239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.891167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.891187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.891196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.900029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.900049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.900058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.911335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.911355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.911364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.921902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.921921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.921940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.932954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.932974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.932983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.623 [2024-09-27 15:56:52.944408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.623 [2024-09-27 15:56:52.944429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.623 [2024-09-27 15:56:52.944438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:52.956209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:52.956229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:52.956238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:52.967619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:52.967639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:52.967648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:52.977773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:52.977793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:52.977802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:52.988868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:52.988888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:52.988901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:52.999396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:52.999416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:52.999425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.007720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.007740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.007750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.018732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.018752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.018762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.028539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.028559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.028568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.040844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.040863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.040872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.052450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.052470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.052479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.063827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.063851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.063860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.073307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.073327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.073337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.083425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.083455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.094826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.094846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.094855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.624 [2024-09-27 15:56:53.105473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.624 [2024-09-27 15:56:53.105493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.624 [2024-09-27 15:56:53.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.116312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.116333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.116342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.127794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.127814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.127824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.138823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.138844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.138852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.150986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.151016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.163369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.163389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.163398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.174042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.174062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.174072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.183528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.183548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.183557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.191698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.191718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.191727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.202414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.202434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.202443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.213368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.213389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.213398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.224290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.224310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.224320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.234105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.234125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.234135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.244467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.244488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.244501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.253606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.253626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.253635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.264498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.264518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.264527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.275241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.275261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.275270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.285615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.285635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.285644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.295148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.295168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.295177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.304739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.304759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.304769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.314817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.314836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.314844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.326239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.326259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.326269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:12.887 [2024-09-27 15:56:53.336127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.887 [2024-09-27 15:56:53.336150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.887 [2024-09-27 15:56:53.336159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:12.888 [2024-09-27 15:56:53.346221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.888 [2024-09-27 15:56:53.346242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.888 [2024-09-27 15:56:53.346251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:12.888 [2024-09-27 15:56:53.356429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.888 [2024-09-27 15:56:53.356449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.888 [2024-09-27 15:56:53.356458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:12.888 [2024-09-27 15:56:53.368823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:12.888 [2024-09-27 15:56:53.368843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:12.888 [2024-09-27 15:56:53.368852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.378624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.378645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.378654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.388712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.388732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.388741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.400614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.400634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.400643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.411664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.411684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.411692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.422304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.422324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.422333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.433239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.433259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.433268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.150 [2024-09-27 15:56:53.442632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.150 [2024-09-27 15:56:53.442652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.150 [2024-09-27 15:56:53.442660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.453552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.453572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.453581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.465122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.465142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.465151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.475983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.476003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.476012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.486724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.486744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.486753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.497914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.497934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.497943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.509245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.509265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.509274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.519493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.519512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.519525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.530892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.530917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.530926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.539743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.539762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.539771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.546935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.546954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.546963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.557330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.557349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.557358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.567184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.567204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.567214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.576063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.576083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.576092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.584386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.584406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.584415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.594926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.594945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.594954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.600767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.600790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.600799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.609140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.609160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.609170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.619467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.619487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.619496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.629027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.629046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.629056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.151 [2024-09-27 15:56:53.636738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.151 [2024-09-27 15:56:53.636758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.151 [2024-09-27 15:56:53.636767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.412 [2024-09-27 15:56:53.646541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.646561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.646571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.658331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.658350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.658359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.413 2970.00 IOPS, 371.25 MiB/s
[2024-09-27 15:56:53.669241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.669261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.680620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.680639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.680652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.691500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.691520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.691530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.703132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.703152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.703162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.714285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.714306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.714316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.725613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.725633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:13.413 [2024-09-27 15:56:53.725643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:13.413 [2024-09-27 15:56:53.734475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0)
00:38:13.413 [2024-09-27 15:56:53.734495]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.734505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.743663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.743682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.743692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.753760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.753780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.753789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.762186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.762214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.772615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.772638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.772647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.783452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.783472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.783481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.793996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.794016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.794025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.806648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 
00:38:13.413 [2024-09-27 15:56:53.806668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.806677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.818634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.818654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.818663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.830906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.830926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.830936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.842902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.842921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.842930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.855161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.855181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.855190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.867459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.867478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.867488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.879409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.879428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.879437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.413 [2024-09-27 15:56:53.891244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.413 [2024-09-27 15:56:53.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.413 [2024-09-27 15:56:53.891273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.674 [2024-09-27 15:56:53.903335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.674 [2024-09-27 15:56:53.903355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.674 [2024-09-27 15:56:53.903364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.674 [2024-09-27 15:56:53.914679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.674 [2024-09-27 15:56:53.914698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.674 [2024-09-27 15:56:53.914708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.925742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.925762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.936369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.936389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.936398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.947928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.947947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.947956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.959233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.959251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.959260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.969123] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.969143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.969156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.979980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.980000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.980009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:53.990939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:53.990959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:53.990968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.000866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.000885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.000897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.011419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.011439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.011447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.022494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.022514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.022523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.034172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.034192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.034201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:38:13.675 [2024-09-27 15:56:54.042624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.042643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.042653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.054157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.054176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.054185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.065875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.065905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.065915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.077146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.077166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.077175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.087430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.087449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.087458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.098506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.098526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.098535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.109407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.109436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.121051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.121072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.121081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.133599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.133619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.133628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.145020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.145049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.675 [2024-09-27 15:56:54.156845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.675 [2024-09-27 15:56:54.156865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.675 [2024-09-27 15:56:54.156874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.166332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.166352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.166362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.178031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.178051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.178060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.188252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.188272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.188281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.198711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.198731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.198740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.210372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.210392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.210401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.220791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.220811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.220820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.232059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.232079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.232088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.242267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.242287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.242296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.251551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.251574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.251584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.262996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.263016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 
[2024-09-27 15:56:54.263025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.274742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.274762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.274771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.286561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.936 [2024-09-27 15:56:54.286580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.936 [2024-09-27 15:56:54.286588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.936 [2024-09-27 15:56:54.296738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.296757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.296766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.307079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.307098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.307106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.316574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.316593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.316602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.322579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.322598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.332773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.332791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.332800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.344668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.344688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.344696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.356613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.356633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.356642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.367912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.367931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.378913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.378932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.378941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.390404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.390424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.390433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.401010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.401030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.401039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.410267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.410286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.410295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:13.937 [2024-09-27 15:56:54.419292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:13.937 [2024-09-27 15:56:54.419311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.937 [2024-09-27 15:56:54.419320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.429444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.429477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.439232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.439252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.439261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.450488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.450507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.450516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.460575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.460594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.460602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.471460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.471480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.471489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.482121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.482140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.482149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.491266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.491285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.491294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.500697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.500716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.500725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.511527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.511546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.511555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.523158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.523181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.523190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.532154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.532173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.532183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.544048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.544068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.544076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.556612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 
00:38:14.198 [2024-09-27 15:56:54.556632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.556641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.569005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.569025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.569033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.581738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.581757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.581766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.594031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.594051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.594060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.605612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.605632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.605641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.615533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.615554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.615563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.626133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.626153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.626162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.636288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.636308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.636317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.647177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.647197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.647206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:14.198 [2024-09-27 15:56:54.658278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.658298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.658308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:14.198 2909.50 IOPS, 363.69 MiB/s [2024-09-27 15:56:54.669812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14cc0a0) 00:38:14.198 [2024-09-27 15:56:54.669832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.198 [2024-09-27 15:56:54.669842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:14.198 00:38:14.198 Latency(us) 00:38:14.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.198 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:14.198 nvme0n1 : 2.00 2910.50 363.81 0.00 0.00 5491.44 757.76 13107.20 00:38:14.198 =================================================================================================================== 00:38:14.199 Total : 2910.50 363.81 0.00 0.00 5491.44 757.76 13107.20 00:38:14.199 { 00:38:14.199 "results": [ 00:38:14.199 { 00:38:14.199 "job": "nvme0n1", 00:38:14.199 "core_mask": "0x2", 00:38:14.199 "workload": "randread", 00:38:14.199 "status": "finished", 00:38:14.199 "queue_depth": 16, 00:38:14.199 "io_size": 131072, 00:38:14.199 "runtime": 2.004807, 00:38:14.199 "iops": 2910.5046021886396, 00:38:14.199 "mibps": 363.81307527357995, 00:38:14.199 "io_failed": 0, 00:38:14.199 "io_timeout": 0, 00:38:14.199 "avg_latency_us": 5491.44408111968, 00:38:14.199 "min_latency_us": 757.76, 00:38:14.199 "max_latency_us": 13107.2 00:38:14.199 } 00:38:14.199 ], 00:38:14.199 "core_count": 1 00:38:14.199 } 00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:14.460 | .driver_specific 00:38:14.460 | .nvme_error 00:38:14.460 | 
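(Aside: the JSON blob above is bdevperf's machine-readable summary of the same randread run. Pulling a single figure back out of it is a one-line jq sketch; the file name bperf_result.json is hypothetical, the fields are exactly those printed above:

    # Sketch only: assumes the JSON summary above was captured to bperf_result.json.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' bperf_result.json

which would report the IOPS/MiB/s/latency figures recorded above.)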
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:14.460 | .driver_specific
00:38:14.460 | .nvme_error
00:38:14.460 | .status_code
00:38:14.460 | .command_transient_transport_error'
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 188 > 0 ))
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 630619
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 630619 ']'
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 630619
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 630619
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 630619'
killing process with pid 630619
15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 630619
Received shutdown signal, test time was about 2.000000 seconds
00:38:14.460
00:38:14.460 Latency(us)
00:38:14.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:14.460 ===================================================================================================================
00:38:14.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:14.460 15:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 630619
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=631395
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 631395 /var/tmp/bperf.sock
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 631395 ']'
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
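(Aside: the get_transient_errcount helper traced above reduces to one RPC plus the jq filter just shown. A minimal standalone sketch, with the rpc.py and socket paths copied from this run and the wrapper variables being illustrative only:

    #!/usr/bin/env bash
    # Sketch: count NVMe 'transient transport error' completions for a bdev.
    # The per-status-code counters exist only because the controller was set up
    # with bdev_nvme_set_options --nvme-error-stat earlier in the test.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    count=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The harness asserts the count is non-zero; this run saw 188.
    (( count > 0 )) && echo "digest errors surfaced as $count transient transport errors"

This is why every injected crc32c corruption above shows up twice: once as a data digest error notice and once as a counted TRANSIENT TRANSPORT ERROR completion.)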
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:14.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:14.721 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:14.721 [2024-09-27 15:56:55.102110] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:38:14.721 [2024-09-27 15:56:55.102171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631395 ]
00:38:14.721 [2024-09-27 15:56:55.177795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:14.721 [2024-09-27 15:56:55.206009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:15.663 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:15.663 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:38:15.663 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:15.663 15:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:15.663 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:15.923 nvme0n1
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
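(Aside: the setup sequence just traced can be restated as four RPCs against the bdevperf socket. A hedged sketch using the exact commands, flags, and addresses from this run; the RPC shell variable is illustrative, not part of the harness:

    #!/usr/bin/env bash
    # Sketch of the randwrite digest-error setup, mirroring host/digest.sh above.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Leave crc32c offload healthy while the controller attaches.
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest (--ddgst) enabled; this creates nvme0n1.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject corruption into crc32c digest calculations (-i 256, as the harness
    # does), so subsequent I/O fails its data digest check.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

The bdevperf.py perform_tests call traced next then drives the 2-second randwrite workload through this deliberately broken digest path.)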
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:15.923 15:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:16.184 Running I/O for 2 seconds...
00:38:16.184 [2024-09-27 15:56:56.446076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f6cc8
00:38:16.184 [2024-09-27 15:56:56.446862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.184 [2024-09-27 15:56:56.446886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:38:16.184 [2024-09-27 15:56:56.455493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060
00:38:16.184 [2024-09-27 15:56:56.456461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.184 [2024-09-27 15:56:56.456481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:38:16.185 [... further WRITE Data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) records, roughly one every 8-10 ms with varying cid/lba and pdu values, trimmed ...]
00:38:16.185 [2024-09-27 15:56:56.615846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.185 [2024-09-27 15:56:56.616815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.616832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.624279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.185 [2024-09-27 15:56:56.625193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.625209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.632709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.185 [2024-09-27 15:56:56.633682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.633698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.641137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.185 [2024-09-27 15:56:56.642057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.642074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.649555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.185 [2024-09-27 15:56:56.650509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.657965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.185 [2024-09-27 15:56:56.658922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.658938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.185 [2024-09-27 15:56:56.666529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710 00:38:16.185 [2024-09-27 15:56:56.667508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.185 [2024-09-27 15:56:56.667524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.446 [2024-09-27 15:56:56.674997] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248 00:38:16.446 [2024-09-27 15:56:56.675967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.446 [2024-09-27 15:56:56.675983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.446 [2024-09-27 15:56:56.683410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408 00:38:16.447 [2024-09-27 15:56:56.684334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.684350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.691845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8 00:38:16.447 [2024-09-27 15:56:56.692804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.692820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.700257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18 00:38:16.447 [2024-09-27 15:56:56.701195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.701211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.708665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298 00:38:16.447 [2024-09-27 15:56:56.709574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.709590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.717074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e73e0 00:38:16.447 [2024-09-27 15:56:56.718041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.718058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.725496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5220 00:38:16.447 [2024-09-27 15:56:56.726470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.726486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 
[2024-09-27 15:56:56.733932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060 00:38:16.447 [2024-09-27 15:56:56.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.734917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.742349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e9168 00:38:16.447 [2024-09-27 15:56:56.743317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.743333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.750762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eaab8 00:38:16.447 [2024-09-27 15:56:56.751709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.751726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.759167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.447 [2024-09-27 15:56:56.760103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.760126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.767593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.447 [2024-09-27 15:56:56.768563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.768580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.776023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.447 [2024-09-27 15:56:56.776987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.777003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.784428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.447 [2024-09-27 15:56:56.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.785410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 
m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.792833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.447 [2024-09-27 15:56:56.793793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.793809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.801252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.447 [2024-09-27 15:56:56.802196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.802212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.809659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710 00:38:16.447 [2024-09-27 15:56:56.810686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.810702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.818187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248 00:38:16.447 [2024-09-27 15:56:56.819153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.819170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.826623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408 00:38:16.447 [2024-09-27 15:56:56.827599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.827615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.835042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8 00:38:16.447 [2024-09-27 15:56:56.835982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.835999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.843446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18 00:38:16.447 [2024-09-27 15:56:56.844414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.844430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.851858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298 00:38:16.447 [2024-09-27 15:56:56.852826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.852843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.860273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e73e0 00:38:16.447 [2024-09-27 15:56:56.861219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.447 [2024-09-27 15:56:56.861235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.447 [2024-09-27 15:56:56.868704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5220 00:38:16.448 [2024-09-27 15:56:56.869657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.869673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.877141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060 00:38:16.448 [2024-09-27 15:56:56.878101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.878117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.885553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e9168 00:38:16.448 [2024-09-27 15:56:56.886498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.886514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.893954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eaab8 00:38:16.448 [2024-09-27 15:56:56.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.894897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.902388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.448 [2024-09-27 15:56:56.903318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.903334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.910816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.448 [2024-09-27 15:56:56.911729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.911745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.919252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.448 [2024-09-27 15:56:56.920211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.920228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.448 [2024-09-27 15:56:56.927661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.448 [2024-09-27 15:56:56.928626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.448 [2024-09-27 15:56:56.928643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.936070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.710 [2024-09-27 15:56:56.937008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.937024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.944486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.710 [2024-09-27 15:56:56.945455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.945471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.952916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710 00:38:16.710 [2024-09-27 15:56:56.953882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.953902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.961340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248 00:38:16.710 [2024-09-27 15:56:56.962309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.962326] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.969759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408 00:38:16.710 [2024-09-27 15:56:56.970727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.970745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.978182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8 00:38:16.710 [2024-09-27 15:56:56.979143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.979163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.986579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18 00:38:16.710 [2024-09-27 15:56:56.987544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.987561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:56.995004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298 00:38:16.710 [2024-09-27 15:56:56.995949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:56.995966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.003423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e73e0 00:38:16.710 [2024-09-27 15:56:57.004390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.004406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.011839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5220 00:38:16.710 [2024-09-27 15:56:57.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.012820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.020245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060 00:38:16.710 [2024-09-27 15:56:57.021203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.021219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.028641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e9168 00:38:16.710 [2024-09-27 15:56:57.029608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.029625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.037041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eaab8 00:38:16.710 [2024-09-27 15:56:57.037968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.037985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.710 [2024-09-27 15:56:57.045468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.710 [2024-09-27 15:56:57.046415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.710 [2024-09-27 15:56:57.046431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.053906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.711 [2024-09-27 15:56:57.054867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.054884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.062333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.711 [2024-09-27 15:56:57.063301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.063318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.070779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.711 [2024-09-27 15:56:57.071746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.071763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.079196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.711 [2024-09-27 15:56:57.080170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 
15:56:57.080186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.087621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.711 [2024-09-27 15:56:57.088592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.088608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.096078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710 00:38:16.711 [2024-09-27 15:56:57.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.097068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.104498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248 00:38:16.711 [2024-09-27 15:56:57.105460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.105477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.112921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408 00:38:16.711 [2024-09-27 15:56:57.113868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.113884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.121335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8 00:38:16.711 [2024-09-27 15:56:57.122299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.122315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.129735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18 00:38:16.711 [2024-09-27 15:56:57.130705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.130722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.138162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298 00:38:16.711 [2024-09-27 15:56:57.139137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:16.711 [2024-09-27 15:56:57.139153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.146575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e73e0 00:38:16.711 [2024-09-27 15:56:57.147512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.147528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.154994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5220 00:38:16.711 [2024-09-27 15:56:57.155942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.155959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.163396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060 00:38:16.711 [2024-09-27 15:56:57.164326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.164342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.171796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e9168 00:38:16.711 [2024-09-27 15:56:57.172765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.172782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.180219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eaab8 00:38:16.711 [2024-09-27 15:56:57.181165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.181181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.188631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.711 [2024-09-27 15:56:57.189602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.711 [2024-09-27 15:56:57.189619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.711 [2024-09-27 15:56:57.197067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.973 [2024-09-27 15:56:57.198044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17187 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.198064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.205478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.973 [2024-09-27 15:56:57.206437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.206454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.213882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.973 [2024-09-27 15:56:57.214830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.214846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.222293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.973 [2024-09-27 15:56:57.223261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.223277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.230726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.973 [2024-09-27 15:56:57.231696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.231713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.239143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710 00:38:16.973 [2024-09-27 15:56:57.240118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.240134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.247570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248 00:38:16.973 [2024-09-27 15:56:57.248542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.248559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.255974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408 00:38:16.973 [2024-09-27 15:56:57.256920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 
nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.256936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.264429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8 00:38:16.973 [2024-09-27 15:56:57.265379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.265395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.272842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18 00:38:16.973 [2024-09-27 15:56:57.273816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.273832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.281276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298 00:38:16.973 [2024-09-27 15:56:57.282247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.282264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.289678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e73e0 00:38:16.973 [2024-09-27 15:56:57.290611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.290627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.973 [2024-09-27 15:56:57.298068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5220 00:38:16.973 [2024-09-27 15:56:57.299014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.973 [2024-09-27 15:56:57.299031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.306458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e3060 00:38:16.974 [2024-09-27 15:56:57.307415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.307432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.314876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e9168 00:38:16.974 [2024-09-27 15:56:57.315838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.315854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.323311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eaab8 00:38:16.974 [2024-09-27 15:56:57.324283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.324299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.331719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ecc78 00:38:16.974 [2024-09-27 15:56:57.332686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.332702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.340119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eee38 00:38:16.974 [2024-09-27 15:56:57.341072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.341089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.348515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f0ff8 00:38:16.974 [2024-09-27 15:56:57.349492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.349508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.356912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4b08 00:38:16.974 [2024-09-27 15:56:57.357871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.357887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.365330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e5a90 00:38:16.974 [2024-09-27 15:56:57.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:16.974 [2024-09-27 15:56:57.366287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:16.974 [2024-09-27 15:56:57.373735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e38d0 00:38:16.974 [2024-09-27 15:56:57.374706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.374722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.382144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198e1710
00:38:16.974 [2024-09-27 15:56:57.383120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.383136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.390539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ea248
00:38:16.974 [2024-09-27 15:56:57.391465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.391481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.398936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ec408
00:38:16.974 [2024-09-27 15:56:57.399898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.399915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.407342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198ee5c8
00:38:16.974 [2024-09-27 15:56:57.408312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.408329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.415757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198eff18
00:38:16.974 [2024-09-27 15:56:57.416719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.416739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 [2024-09-27 15:56:57.424194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298
00:38:16.974 [2024-09-27 15:56:57.425149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:16.974 [2024-09-27 15:56:57.425165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:16.974 30006.00 IOPS, 117.21 MiB/s
[... condensed: the same three-line pattern -- a tcp.c:2233 data_crc32_calc_done *ERROR* for the corrupted data digest, the offending WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every further injected digest error from 15:56:57.432 through 15:56:58.426, with only the cid, lba, pdu, and sqhd values varying; the intervening near-identical entries are omitted here. The iostat query below counts 236 such completions for the whole run. The final occurrence follows. ...]
00:38:18.028 [2024-09-27 15:56:58.426992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410fa0) with pdu=0x2000198f4298
00:38:18.028 [2024-09-27 15:56:58.428108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:18.028 [2024-09-27 15:56:58.428125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:38:18.028 30180.50 IOPS, 117.89 MiB/s
00:38:18.028 Latency(us)
00:38:18.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:18.028 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:18.028 nvme0n1 : 2.00 30176.86 117.88 0.00 0.00 4236.38 2184.53 16493.23
00:38:18.028 ===================================================================================================================
00:38:18.028 Total : 30176.86 117.88 0.00 0.00 4236.38 2184.53 16493.23
00:38:18.028 {
00:38:18.028   "results": [
00:38:18.028     {
00:38:18.028       "job": "nvme0n1",
00:38:18.028       "core_mask": "0x2",
00:38:18.028       "workload": "randwrite",
00:38:18.028       "status": "finished",
00:38:18.028       "queue_depth": 128,
00:38:18.028       "io_size": 4096,
00:38:18.028       "runtime": 2.002362,
00:38:18.028       "iops": 30176.861127009004,
00:38:18.028       "mibps": 117.87836377737892,
00:38:18.028       "io_failed": 0,
00:38:18.028       "io_timeout": 0,
00:38:18.028       "avg_latency_us": 4236.37728048545,
00:38:18.028       "min_latency_us": 2184.5333333333333,
00:38:18.028       "max_latency_us": 16493.226666666666
00:38:18.028     }
00:38:18.028   ],
00:38:18.028   "core_count": 1
00:38:18.028 }
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:18.028 | .driver_specific
00:38:18.028 | .nvme_error
00:38:18.028 | .status_code
00:38:18.028 | .command_transient_transport_error'
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:18.289
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 631395
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 631395 ']'
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 631395
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631395
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
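The (( 236 > 0 )) check above is the pass condition for this pass: get_transient_errcount reads the per-controller NVMe error counters (kept because bdev_nvme_set_options was called with --nvme-error-stat) and asserts that at least one TRANSIENT TRANSPORT ERROR completion was observed. A minimal sketch of the same query run by hand, assuming the bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and the attached bdev is named nvme0n1 (both taken from the trace above; SPDK_DIR is an assumed checkout path):

#!/usr/bin/env bash
# Sketch: count TRANSIENT TRANSPORT ERROR completions the way get_transient_errcount does.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error'
# Prints 236 for this run; the test only requires the count to be non-zero,
# i.e. the injected crc32c corruption actually surfaced as digest errors.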
00:38:18.290 killing process with pid 631395
15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 631395
Received shutdown signal, test time was about 2.000000 seconds
00:38:18.290
00:38:18.290 Latency(us)
00:38:18.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:18.290 ===================================================================================================================
00:38:18.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:18.290 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 631395
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=632153
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 632153 /var/tmp/bperf.sock
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 632153 ']'
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:18.551 15:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:18.551 [2024-09-27 15:56:58.866125] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:38:18.551 [2024-09-27 15:56:58.866179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632153 ]
00:38:18.551 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:18.551 Zero copy mechanism will not be used.
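run_bperf_err starts a fresh bdevperf for each pattern (here randwrite, 128 KiB blocks, queue depth 16) in idle RPC-server mode and waits on its UNIX-domain socket before configuring anything. A sketch of that launch-and-wait pattern, assuming a simple rpc_get_methods probe; the real waitforlisten helper in common/autotest_common.sh (note max_retries=100 in the trace) is more thorough:

    # Launch bdevperf idle (-z): the 2-second randwrite workload is armed on
    # the command line but only runs when perform_tests is sent over RPC.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll until the RPC server answers; the probe used here is an assumption.
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done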
00:38:18.551 [2024-09-27 15:56:58.943733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:18.551 [2024-09-27 15:56:58.970039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:19.492 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:19.492 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:38:19.492 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:19.493 15:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:19.753 nvme0n1
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:19.753 15:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:20.015 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:20.015 Zero copy mechanism will not be used.
00:38:20.015 Running I/O for 2 seconds...
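Condensed, the setup traced above is four RPCs and a trigger: enable per-bdev NVMe error statistics with unlimited retries, make sure no stale crc32c corruption is armed while the controller attaches, attach over TCP with data digest (--ddgst) so payload CRCs are generated and verified, then arm the accel error injector and start the workload. The commands below are taken verbatim from the trace; only which socket rpc_cmd targets (the target app's default) is inferred:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf, initiator side
    rpc_cmd="$spdk/scripts/rpc.py"                            # target app, default socket (assumed)

    # Keep per-status-code NVMe error counters; never give up on failed I/O.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # No crc32c corruption may be active while the controller attaches.
    $rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled so payload CRCs are checked on the wire.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the injector (corrupt crc32c results, -i 32) and run the workload.
    $rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests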
00:38:20.015 [2024-09-27 15:57:00.329146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90
00:38:20.015 [2024-09-27 15:57:00.329388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:20.015 [2024-09-27 15:57:00.329421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens of further data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR record triplets omitted (timestamps 15:57:00.338 through 15:57:01.237, tqpair 0x14112e0, pdu 0x2000198fef90); identical apart from timestamp, cid, and lba ...]
00:38:20.805 [2024-09-27 15:57:01.240013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90
[2024-09-27 15:57:01.240093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.805 [2024-09-27 15:57:01.240109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.805 [2024-09-27 15:57:01.243160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.805 [2024-09-27 15:57:01.243276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.805 [2024-09-27 15:57:01.243293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.805 [2024-09-27 15:57:01.248919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.805 [2024-09-27 15:57:01.249192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.805 [2024-09-27 15:57:01.249209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.805 [2024-09-27 15:57:01.252463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.252520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.252539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.254903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.254964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.254984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.257803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.257922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.264056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.264329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.264344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.273386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.273602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.273618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.281172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.281287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.281305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.283785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.283880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.283902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.286422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.286517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.286535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:20.806 [2024-09-27 15:57:01.289054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:20.806 [2024-09-27 15:57:01.289146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:20.806 [2024-09-27 15:57:01.289164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.291725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.291821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.291841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.294311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.294411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.294429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.297020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.297116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.297134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.299771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.299869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.299887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.302281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.302375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.305408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.305518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.305535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.308342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.308453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.308472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.311639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.311761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.311779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.314253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.314348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.314366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.069 4295.00 IOPS, 536.88 MiB/s [2024-09-27 15:57:01.318980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 
00:38:21.069 [2024-09-27 15:57:01.319237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.319254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.324395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.324478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.324496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.327156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.327228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.327247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.332939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.333247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.333264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.069 [2024-09-27 15:57:01.340944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.069 [2024-09-27 15:57:01.341236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.069 [2024-09-27 15:57:01.341254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.348656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.348928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.348946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.356749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.356824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.356843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.359733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.359807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.359827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.362719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.362795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.362816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.365454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.365533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.365554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.368196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.368279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.368300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.370922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.371010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.371031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.373916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.373989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.374010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.377015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.377109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.377126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.379477] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.379559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.379576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.381919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.382002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.382018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.384981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.385051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.385072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.392722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.392829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.392849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.396599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.396715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.396733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.403311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.403448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.403465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.412094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.412384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.412401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.421926] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.422207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.422223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.432323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.432417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.432433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.442672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.443023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.443040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.452934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.453280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.453297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.463990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.464305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.464321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.474646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.070 [2024-09-27 15:57:01.474892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.070 [2024-09-27 15:57:01.474913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.070 [2024-09-27 15:57:01.485518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.485610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.485627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
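Each of the repeated records above follows the same three-step pattern: the initiator-side TCP transport finishes computing the CRC32C data digest for a PDU (data_crc32_calc_done at tcp.c:2233) and reports a mismatch, the in-flight 32-block WRITE is printed by nvme_io_qpair_print_command, and the command is then completed with the generic status Transient Transport Error (00/22) with dnr:0, i.e. the do-not-retry bit left clear so the upper layer may resubmit. The sketch below is an illustrative, self-contained CRC-32C routine using the digest parameters NVMe/TCP specifies (Castagnoli polynomial 0x1EDC6F41, reflected, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF); it is not SPDK's code — SPDK uses accelerated helpers such as spdk_crc32c_update() — and the zeroed data buffer is a hypothetical stand-in for one PDU payload.

/* Illustrative sketch only: the CRC-32C data digest (DDGST) whose
 * mismatch produces the data_crc32_calc_done errors in this log.
 * Bitwise variant for clarity; production code uses lookup tables
 * or the SSE4.2 crc32 instruction. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;   /* NVMe/TCP digest initial value */

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++) {
            /* 0x82F63B78 is the reflected form of poly 0x1EDC6F41 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;     /* final XOR */
}

int main(void)
{
    /* The receiver recomputes the digest over the PDU DATA and compares
     * it with the 4-byte DDGST trailer; any mismatch is a data digest
     * error and the command completes with status 00/22. */
    uint8_t data[32] = { 0 };     /* hypothetical stand-in payload */
    printf("DDGST = 0x%08x\n", crc32c(data, sizeof(data)));
    return 0;
}

A quick sanity check for any CRC-32C implementation is the ASCII string "123456789", whose digest must come out as 0xE3069283.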
00:38:21.071 [2024-09-27 15:57:01.496476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.496810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.496826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.506990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.507270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.517163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.517385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.517401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.527565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.527778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.527794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.537708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.537988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.538005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.545143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.545216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.545236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.548913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.548973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.548993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.071 [2024-09-27 15:57:01.551945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.071 [2024-09-27 15:57:01.552008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.071 [2024-09-27 15:57:01.552028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.556409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.556460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.556478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.559965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.560044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.560061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.563460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.563507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.563526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.566849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.566928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.566949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.570394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.570441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.570460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.574634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.574733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.574749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.578938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.579189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.588997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.589232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.589251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.333 [2024-09-27 15:57:01.598934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.333 [2024-09-27 15:57:01.599227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.333 [2024-09-27 15:57:01.599244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.609254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.609484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.609500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.619413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.619712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.619729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.630519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.630770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.630786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.641486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.641760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.641776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.651100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.651353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.651370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.661283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.661578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.661594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.670493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.670720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.670736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.675792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.675842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.675861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.678537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.678589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.678607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.681179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.681233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.681252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.683839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.683886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 
[2024-09-27 15:57:01.683909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.686464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.686517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.686535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.689109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.689159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.689180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.691705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.691771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.691789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.694204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.694256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.694277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.696623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.696903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.696922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.699719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.699785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.699803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.702144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.702197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.702216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.705074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.705156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.705175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.708143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.708218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.708237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.710983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.711051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.711068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.717462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.717518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.717538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:21.334 [2024-09-27 15:57:01.720672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.334 [2024-09-27 15:57:01.720799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.334 [2024-09-27 15:57:01.720815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:21.335 [2024-09-27 15:57:01.726581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.335 [2024-09-27 15:57:01.726845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.335 [2024-09-27 15:57:01.726861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.335 [2024-09-27 15:57:01.736438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90 00:38:21.335 [2024-09-27 15:57:01.736684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.335 [2024-09-27 15:57:01.736704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:21.335 [2024-09-27 15:57:01.746603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90
00:38:21.335 [2024-09-27 15:57:01.746836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.335 [2024-09-27 15:57:01.746852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... a long run of near-identical record groups elided (timestamps 15:57:01.756739 through 15:57:02.311910): each group repeats the same tcp.c:2233:data_crc32_calc_done "Data digest error" on tqpair=(0x14112e0) with pdu=0x2000198fef90, the offending WRITE (sqid:1 cid:0 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
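Each failure in the run above is logged as a three-line group: tcp.c:2233:data_crc32_calc_done reports a data digest (CRC32C) mismatch on the queue pair, nvme_qpair.c prints the WRITE command that carried the bad payload, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A quick way to tally such groups from a saved copy of this console output; the log path is hypothetical and the patterns assume exactly the record formats shown above:

    log=/tmp/nvmf_digest_error.log    # hypothetical capture of this console output

    # Count the data-digest failures flagged by the TCP transport.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log"

    # List the distinct LBAs of the WRITEs that failed, sorted numerically.
    grep -o 'lba:[0-9]*' "$log" | sort -t: -k2 -n -u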
00:38:21.865 4972.50 IOPS, 621.56 MiB/s [2024-09-27 15:57:02.319804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14112e0) with pdu=0x2000198fef90
00:38:21.865 [2024-09-27 15:57:02.320091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.865 [2024-09-27 15:57:02.320107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:21.865
00:38:21.865 Latency(us)
00:38:21.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:21.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:38:21.865 nvme0n1 : 2.00 4971.92 621.49 0.00 0.00 3213.71 1112.75 11905.71
00:38:21.865 ===================================================================================================================
00:38:21.865 Total : 4971.92 621.49 0.00 0.00 3213.71 1112.75 11905.71
00:38:21.865 {
00:38:21.865   "results": [
00:38:21.865     {
00:38:21.865       "job": "nvme0n1",
00:38:21.865       "core_mask": "0x2",
00:38:21.865       "workload": "randwrite",
00:38:21.865       "status": "finished",
00:38:21.865       "queue_depth": 16,
00:38:21.865       "io_size": 131072,
00:38:21.865       "runtime": 2.004054,
00:38:21.865       "iops": 4971.921914279755,
00:38:21.865       "mibps": 621.4902392849693,
00:38:21.865       "io_failed": 0,
00:38:21.865       "io_timeout": 0,
00:38:21.865       "avg_latency_us": 3213.712254783889,
00:38:21.865       "min_latency_us": 1112.7466666666667,
00:38:21.865       "max_latency_us": 11905.706666666667
00:38:21.865     }
00:38:21.865   ],
00:38:21.865   "core_count": 1
00:38:21.865 }
00:38:21.865 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:22.127 | .driver_specific
00:38:22.127 | .nvme_error
00:38:22.127 | .status_code
00:38:22.127 | .command_transient_transport_error'
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 ))
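The trace above is the test's pass/fail check: get_transient_errcount queries bdev iostat over the bdevperf RPC socket, pulls the command_transient_transport_error counter out of the NVMe error stats with jq, and host/digest.sh then asserts the count is positive (321 here, one per injected digest error). A sketch of that helper reconstructed from the traced commands; the authoritative definition lives in host/digest.sh and may differ in detail:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        # Fetch iostat for the named bdev from the bdevperf app's RPC socket
        # and extract the transient transport error counter.
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The assertion as traced: at least one transient error must have occurred.
    (( $(get_transient_errcount nvme0n1) > 0 ))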
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 632153
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 632153 ']'
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 632153
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632153
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632153'
00:38:22.127 killing process with pid 632153
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 632153
00:38:22.127 Received shutdown signal, test time was about 2.000000 seconds
00:38:22.127
00:38:22.127 Latency(us)
00:38:22.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:22.127 ===================================================================================================================
00:38:22.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:22.127 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 632153
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 629826
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 629826 ']'
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 629826
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 629826
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 629826'
00:38:22.388 killing process with pid 629826
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 629826
00:38:22.388 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 629826
00:38:22.648
00:38:22.648 real 0m16.514s
00:38:22.648 user 0m32.644s
00:38:22.648 sys 0m3.638s
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:22.648 ************************************
00:38:22.648 END TEST nvmf_digest_error
00:38:22.648 ************************************
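killprocess, traced twice above (once for the bdevperf app, once for the nvmf target), is autotest_common.sh's guarded kill: verify the pid is still alive with kill -0, look up the command name so a sudo wrapper is not signalled as if it were the target, then kill and reap. A condensed sketch of the path the trace takes; the in-tree helper has fuller sudo and error handling:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # a pid argument is required
        kill -0 "$pid"                          # fail if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # Traced runs saw reactor_1/reactor_0, so the non-sudo branch fires.
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                             # reap and propagate the exit status
    }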
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:22.648 15:57:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:22.648 rmmod nvme_tcp
00:38:22.648 rmmod nvme_fabrics
00:38:22.648 rmmod nvme_keyring
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 629826 ']'
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 629826
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 629826 ']'
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 629826
00:38:22.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (629826) - No such process
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 629826 is not found'
00:38:22.648 Process with pid 629826 is not found
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:22.648 15:57:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:25.196
00:38:25.196 real 0m43.270s
00:38:25.196 user 1m7.745s
00:38:25.196 sys 0m13.259s
00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:38:25.196 ************************************
00:38:25.196 END TEST nvmf_digest
00:38:25.196 ************************************
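nvmftestfini, traced above, unwinds the fixture in a fixed order: sync, unload the NVMe transport modules inside a set +e retry loop (rmmod can race with lingering references), kill the target if its pid is still registered (already gone here, hence "No such process"), strip SPDK's iptables rules, and flush the test interface. A sketch of the TCP cleanup path as it appears in the trace; the retry bound and rule filter are copied from the visible commands, and the sleep between attempts is an assumption the trace does not show:

    nvmf_tcp_cleanup() {
        sync
        set +e                                  # module removal may need several tries
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1                             # assumed back-off; not visible in the trace
        done
        set -e
        # Drop SPDK's firewall rules while leaving everything else intact.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip -4 addr flush cvl_0_1                # interface name from this test rig
    }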
************************************ 00:38:25.196 END TEST nvmf_digest 00:38:25.196 ************************************ 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.196 ************************************ 00:38:25.196 START TEST nvmf_bdevperf 00:38:25.196 ************************************ 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:25.196 * Looking for test storage... 00:38:25.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.196 --rc genhtml_branch_coverage=1 00:38:25.196 --rc genhtml_function_coverage=1 00:38:25.196 --rc genhtml_legend=1 00:38:25.196 --rc geninfo_all_blocks=1 00:38:25.196 --rc geninfo_unexecuted_blocks=1 00:38:25.196 00:38:25.196 ' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.196 --rc genhtml_branch_coverage=1 00:38:25.196 --rc genhtml_function_coverage=1 00:38:25.196 --rc genhtml_legend=1 00:38:25.196 --rc geninfo_all_blocks=1 00:38:25.196 --rc geninfo_unexecuted_blocks=1 00:38:25.196 00:38:25.196 ' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.196 --rc genhtml_branch_coverage=1 00:38:25.196 --rc genhtml_function_coverage=1 00:38:25.196 --rc genhtml_legend=1 00:38:25.196 --rc geninfo_all_blocks=1 00:38:25.196 --rc geninfo_unexecuted_blocks=1 00:38:25.196 00:38:25.196 ' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.196 --rc genhtml_branch_coverage=1 00:38:25.196 --rc genhtml_function_coverage=1 00:38:25.196 --rc genhtml_legend=1 00:38:25.196 --rc geninfo_all_blocks=1 00:38:25.196 --rc geninfo_unexecuted_blocks=1 00:38:25.196 00:38:25.196 ' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.196 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:25.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.197 15:57:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.346 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:33.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:33.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.347 
15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:33.347 Found net devices under 0000:31:00.0: cvl_0_0 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:33.347 Found net devices under 0000:31:00.1: cvl_0_1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:33.347 15:57:12 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:33.347 15:57:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:33.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:38:33.347 00:38:33.347 --- 10.0.0.2 ping statistics --- 00:38:33.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.347 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:33.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:38:33.347 00:38:33.347 --- 10.0.0.1 ping statistics --- 00:38:33.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.347 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=637663 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 637663 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 637663 ']' 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.347 15:57:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.347 [2024-09-27 15:57:13.240782] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
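The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is a poll loop: waitforlisten returns only once the freshly forked nvmf_tgt is both still alive and serving its RPC socket. A minimal bash sketch of that pattern (illustrative only; wait_for_sock is a hypothetical name, not the autotest helper itself):

    wait_for_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
            [[ -S $sock ]] && return 0               # RPC socket is up, ready for rpc_cmd
            sleep 0.1
        done
        return 1                                     # timed out
    }
    wait_for_sock "$nvmfpid" || exit 1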
00:38:33.347 [2024-09-27 15:57:13.240847] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.347 [2024-09-27 15:57:13.333267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:33.347 [2024-09-27 15:57:13.380692] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.347 [2024-09-27 15:57:13.380751] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.348 [2024-09-27 15:57:13.380765] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.348 [2024-09-27 15:57:13.380775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.348 [2024-09-27 15:57:13.380783] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.348 [2024-09-27 15:57:13.380938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.348 [2024-09-27 15:57:13.381025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.348 [2024-09-27 15:57:13.381024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:33.609 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:33.609 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:38:33.609 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:33.609 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:33.609 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 [2024-09-27 15:57:14.113552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 Malloc0 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
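The three "Reactor started on core 1/2/3" notices above follow directly from the -m 0xE mask handed to nvmf_tgt: 0xE is binary 1110, one bit per CPU, which also explains the "Total cores available: 3" notice. A quick sketch to decode such a mask:

    # Decode an SPDK-style hex core mask into CPU indices.
    # 0xE -> bit 0 clear, bits 1-3 set -> reactors on cores 1, 2 and 3.
    mask=0xE
    for (( cpu = 0; cpu < 64; cpu++ )); do
        (( (mask >> cpu) & 1 )) && echo "reactor on core $cpu"
    done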
00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.870 [2024-09-27 15:57:14.189530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:33.870 { 00:38:33.870 "params": { 00:38:33.870 "name": "Nvme$subsystem", 00:38:33.870 "trtype": "$TEST_TRANSPORT", 00:38:33.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:33.870 "adrfam": "ipv4", 00:38:33.870 "trsvcid": "$NVMF_PORT", 00:38:33.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:33.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:33.870 "hdgst": ${hdgst:-false}, 00:38:33.870 "ddgst": ${ddgst:-false} 00:38:33.870 }, 00:38:33.870 "method": "bdev_nvme_attach_controller" 00:38:33.870 } 00:38:33.870 EOF 00:38:33.870 )") 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:38:33.870 15:57:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:33.870 "params": { 00:38:33.870 "name": "Nvme1", 00:38:33.870 "trtype": "tcp", 00:38:33.870 "traddr": "10.0.0.2", 00:38:33.870 "adrfam": "ipv4", 00:38:33.870 "trsvcid": "4420", 00:38:33.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:33.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:33.870 "hdgst": false, 00:38:33.870 "ddgst": false 00:38:33.870 }, 00:38:33.870 "method": "bdev_nvme_attach_controller" 00:38:33.870 }' 00:38:33.870 [2024-09-27 15:57:14.247111] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
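Between them, the rpc_cmd calls traced at host/bdevperf.sh lines 17-21 above take the target from empty to serving one namespace on 10.0.0.2:4420. Written out as plain rpc.py invocations (a sketch: the flags are copied verbatim from the trace, and scripts/rpc.py is assumed to talk to the same /var/tmp/spdk.sock the target opened):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init, as logged by tcp.c:738
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420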
00:38:33.870 [2024-09-27 15:57:14.247176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637912 ] 00:38:33.870 [2024-09-27 15:57:14.316993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.131 [2024-09-27 15:57:14.363509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.392 Running I/O for 1 seconds... 00:38:35.336 8715.00 IOPS, 34.04 MiB/s 00:38:35.336 Latency(us) 00:38:35.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.336 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:35.336 Verification LBA range: start 0x0 length 0x4000 00:38:35.336 Nvme1n1 : 1.01 8741.52 34.15 0.00 0.00 14575.22 3222.19 15510.19 00:38:35.336 =================================================================================================================== 00:38:35.336 Total : 8741.52 34.15 0.00 0.00 14575.22 3222.19 15510.19 00:38:35.596 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=638256 00:38:35.596 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:35.596 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:35.596 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:35.597 { 00:38:35.597 "params": { 00:38:35.597 "name": "Nvme$subsystem", 00:38:35.597 "trtype": "$TEST_TRANSPORT", 00:38:35.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.597 "adrfam": "ipv4", 00:38:35.597 "trsvcid": "$NVMF_PORT", 00:38:35.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.597 "hdgst": ${hdgst:-false}, 00:38:35.597 "ddgst": ${ddgst:-false} 00:38:35.597 }, 00:38:35.597 "method": "bdev_nvme_attach_controller" 00:38:35.597 } 00:38:35.597 EOF 00:38:35.597 )") 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:38:35.597 15:57:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:35.597 "params": { 00:38:35.597 "name": "Nvme1", 00:38:35.597 "trtype": "tcp", 00:38:35.597 "traddr": "10.0.0.2", 00:38:35.597 "adrfam": "ipv4", 00:38:35.597 "trsvcid": "4420", 00:38:35.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.597 "hdgst": false, 00:38:35.597 "ddgst": false 00:38:35.597 }, 00:38:35.597 "method": "bdev_nvme_attach_controller" 00:38:35.597 }' 00:38:35.597 [2024-09-27 15:57:15.898765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:38:35.597 [2024-09-27 15:57:15.898837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638256 ] 00:38:35.597 [2024-09-27 15:57:15.982651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.597 [2024-09-27 15:57:16.014378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.857 Running I/O for 15 seconds... 00:38:38.694 8851.00 IOPS, 34.57 MiB/s 10070.00 IOPS, 39.34 MiB/s 15:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 637663 00:38:38.694 15:57:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:38.694 [2024-09-27 15:57:18.862652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.694 [2024-09-27 15:57:18.862794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.694 [2024-09-27 15:57:18.862801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.695 [... the trace continues with dozens more nvme_qpair.c pairs of exactly this shape: a 243:nvme_io_qpair_print_command READ sqid:1 nsid:1 len:8 notice for each lba from 102112 through 102448 and onward, each followed by a 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 notice ...]
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102992 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:38.696 [2024-09-27 15:57:18.863796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.696 [2024-09-27 15:57:18.863879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:38.696 [2024-09-27 15:57:18.863891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.696 [2024-09-27 15:57:18.863902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.863990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.863996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 
15:57:18.864013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864252] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.697 [2024-09-27 15:57:18.864345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:38.697 [2024-09-27 15:57:18.864351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd350 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.864359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:38.698 [2024-09-27 15:57:18.864363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:38.698 [2024-09-27 15:57:18.864368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102960 len:8 PRP1 0x0 PRP2 0x0 00:38:38.698 [2024-09-27 15:57:18.864373] nvme_qpair.c: 
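For readers decoding the repeated status pair: "(00/08)" is SPDK's SCT/SC rendering of the NVMe completion status, i.e. Status Code Type 0x0 (generic command status) and Status Code 0x08 (Command Aborted due to SQ Deletion). A minimal sketch of that decode, not SPDK's own printer; the bit layout is the NVMe-spec status word with the phase tag at bit 0:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 16-bit word from CQE DW3[31:16]: phase tag bit 0, SC bits 8:1,
         * SCT bits 11:9, DNR bit 15 */
        uint16_t status = (0x0 << 9) | (0x08 << 1);
        uint8_t sct = (status >> 9) & 0x7;
        uint8_t sc  = (status >> 1) & 0xff;
        printf("sct=%02x sc=%02x -> ABORTED - SQ DELETION\n", sct, sc);
        return 0;
    }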
00:38:38.698 [2024-09-27 15:57:18.864404] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6fd350 was disconnected and freed. reset controller.
00:38:38.698 [2024-09-27 15:57:18.866846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.698 [2024-09-27 15:57:18.866885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.698 [2024-09-27 15:57:18.867520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.698 [2024-09-27 15:57:18.867534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.698 [2024-09-27 15:57:18.867540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.698 [2024-09-27 15:57:18.867691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.698 [2024-09-27 15:57:18.867843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.698 [2024-09-27 15:57:18.867851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.698 [2024-09-27 15:57:18.867859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.698 [2024-09-27 15:57:18.870306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.698 [2024-09-27 15:57:18.879644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.698 [2024-09-27 15:57:18.880202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.698 [2024-09-27 15:57:18.880233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.698 [2024-09-27 15:57:18.880241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.698 [2024-09-27 15:57:18.880412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.698 [2024-09-27 15:57:18.880566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.698 [2024-09-27 15:57:18.880573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.698 [2024-09-27 15:57:18.880578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.698 [2024-09-27 15:57:18.883028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
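Every reset attempt in this stretch fails the same way: errno 111 on Linux is ECONNREFUSED, meaning nothing is listening at 10.0.0.2:4420 while the target side of the test is down. A self-contained sketch of just that failure mode, assuming only a closed port; this is an illustration of the pattern in the log, not SPDK's socket code:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 0; attempt < 3; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            /* connect() to a closed port fails with errno 111 */
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                printf("attempt %d: connect() failed, errno = %d (%s)\n",
                       attempt, errno, strerror(errno));
            close(fd);
            usleep(12000); /* retries in the log arrive every ~12-13 ms */
        }
        return 0;
    }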
00:38:38.698 [2024-09-27 15:57:18.892381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.892958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.892988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.892997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.893166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.893321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.893330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.893335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.895787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.698 [2024-09-27 15:57:18.905132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.905600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.905614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.905620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.905771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.905936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.905943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.905948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.908390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.698 [2024-09-27 15:57:18.917866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.918324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.918338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.918343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.918494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.918645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.918651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.918656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.921094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.698 [2024-09-27 15:57:18.930567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.931219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.931249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.931258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.931425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.931580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.931586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.931591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.934041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.698 [2024-09-27 15:57:18.943245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.943800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.943830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.943839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.944012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.944167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.944173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.944178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.946653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.698 [2024-09-27 15:57:18.955875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.956418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.956448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.956457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.956623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.956778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.956784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.956789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.959253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.698 [2024-09-27 15:57:18.968606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.969193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.969224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.698 [2024-09-27 15:57:18.969232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.698 [2024-09-27 15:57:18.969401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.698 [2024-09-27 15:57:18.969556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.698 [2024-09-27 15:57:18.969562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.698 [2024-09-27 15:57:18.969567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.698 [2024-09-27 15:57:18.972021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.698 [2024-09-27 15:57:18.981230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.698 [2024-09-27 15:57:18.981738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.698 [2024-09-27 15:57:18.981767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:18.981779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:18.981952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:18.982107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:18.982113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:18.982118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:18.984562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.699 [2024-09-27 15:57:18.993917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:18.994379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:18.994393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:18.994399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:18.994550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:18.994701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:18.994707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:18.994712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:18.997157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.699 [2024-09-27 15:57:19.006659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.007236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.007267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.007275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.007442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.007597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.007603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.007608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.010055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.699 [2024-09-27 15:57:19.019392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.019883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.019902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.019908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.020059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.020210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.020216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.020225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.022664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.699 [2024-09-27 15:57:19.032144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.032645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.032657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.032662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.032813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.032969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.032975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.032980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.035417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.699 [2024-09-27 15:57:19.044896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.045342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.045354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.045359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.045509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.045660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.045666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.045672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.048111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.699 [2024-09-27 15:57:19.057589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.058059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.058089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.058098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.058267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.058421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.058427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.058433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.060888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.699 [2024-09-27 15:57:19.070241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.070824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.070854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.070863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.071039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.071194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.071200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.699 [2024-09-27 15:57:19.071205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.699 [2024-09-27 15:57:19.073650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.699 [2024-09-27 15:57:19.082998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.699 [2024-09-27 15:57:19.083456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.699 [2024-09-27 15:57:19.083470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.699 [2024-09-27 15:57:19.083476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.699 [2024-09-27 15:57:19.083627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.699 [2024-09-27 15:57:19.083779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.699 [2024-09-27 15:57:19.083784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.700 [2024-09-27 15:57:19.083789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.700 [2024-09-27 15:57:19.086237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.700 [2024-09-27 15:57:19.095723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.700 [2024-09-27 15:57:19.096140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.700 [2024-09-27 15:57:19.096153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.700 [2024-09-27 15:57:19.096158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.700 [2024-09-27 15:57:19.096309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.700 [2024-09-27 15:57:19.096460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.700 [2024-09-27 15:57:19.096466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.700 [2024-09-27 15:57:19.096470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.700 [2024-09-27 15:57:19.098913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.700 [2024-09-27 15:57:19.108409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.700 [2024-09-27 15:57:19.108993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.700 [2024-09-27 15:57:19.109023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.700 [2024-09-27 15:57:19.109032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.700 [2024-09-27 15:57:19.109205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.700 [2024-09-27 15:57:19.109359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.700 [2024-09-27 15:57:19.109366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.700 [2024-09-27 15:57:19.109371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.700 [2024-09-27 15:57:19.111821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.700 [2024-09-27 15:57:19.121161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.700 [2024-09-27 15:57:19.121727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.700 [2024-09-27 15:57:19.121757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.700 [2024-09-27 15:57:19.121766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.700 [2024-09-27 15:57:19.121939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.700 [2024-09-27 15:57:19.122093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.700 [2024-09-27 15:57:19.122099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.700 [2024-09-27 15:57:19.122105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.700 [2024-09-27 15:57:19.124547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:38.700 [2024-09-27 15:57:19.133889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:38.700 [2024-09-27 15:57:19.134371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.700 [2024-09-27 15:57:19.134386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:38.700 [2024-09-27 15:57:19.134392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:38.700 [2024-09-27 15:57:19.134544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:38.700 [2024-09-27 15:57:19.134695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:38.700 [2024-09-27 15:57:19.134702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:38.700 [2024-09-27 15:57:19.134706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:38.700 [2024-09-27 15:57:19.137150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:38.700 [2024-09-27 15:57:19.146629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.700 [2024-09-27 15:57:19.147169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.700 [2024-09-27 15:57:19.147181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.700 [2024-09-27 15:57:19.147187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.700 [2024-09-27 15:57:19.147338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.700 [2024-09-27 15:57:19.147488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.700 [2024-09-27 15:57:19.147494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.700 [2024-09-27 15:57:19.147499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.700 [2024-09-27 15:57:19.149943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.700 [2024-09-27 15:57:19.159309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.700 [2024-09-27 15:57:19.159761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.700 [2024-09-27 15:57:19.159774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.700 [2024-09-27 15:57:19.159780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.700 [2024-09-27 15:57:19.159936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.700 [2024-09-27 15:57:19.160088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.700 [2024-09-27 15:57:19.160093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.700 [2024-09-27 15:57:19.160098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.700 [2024-09-27 15:57:19.162535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.700 9711.00 IOPS, 37.93 MiB/s [2024-09-27 15:57:19.172857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.700 [2024-09-27 15:57:19.173453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.700 [2024-09-27 15:57:19.173483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.700 [2024-09-27 15:57:19.173492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.700 [2024-09-27 15:57:19.173659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.700 [2024-09-27 15:57:19.173813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.700 [2024-09-27 15:57:19.173819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.700 [2024-09-27 15:57:19.173825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.700 [2024-09-27 15:57:19.176274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.961 [2024-09-27 15:57:19.185468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.961 [2024-09-27 15:57:19.185950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-09-27 15:57:19.185980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.961 [2024-09-27 15:57:19.185989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.961 [2024-09-27 15:57:19.186157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.961 [2024-09-27 15:57:19.186311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.961 [2024-09-27 15:57:19.186317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.961 [2024-09-27 15:57:19.186323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.961 [2024-09-27 15:57:19.188772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.961 [2024-09-27 15:57:19.198112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.961 [2024-09-27 15:57:19.198689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-09-27 15:57:19.198723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.961 [2024-09-27 15:57:19.198731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.961 [2024-09-27 15:57:19.198905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.961 [2024-09-27 15:57:19.199060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.961 [2024-09-27 15:57:19.199066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.961 [2024-09-27 15:57:19.199071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.961 [2024-09-27 15:57:19.201515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.961 [2024-09-27 15:57:19.210861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.961 [2024-09-27 15:57:19.211434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-09-27 15:57:19.211463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.961 [2024-09-27 15:57:19.211472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.961 [2024-09-27 15:57:19.211639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.961 [2024-09-27 15:57:19.211793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.961 [2024-09-27 15:57:19.211799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.961 [2024-09-27 15:57:19.211805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.961 [2024-09-27 15:57:19.214254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.961 [2024-09-27 15:57:19.223591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.961 [2024-09-27 15:57:19.224216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-09-27 15:57:19.224245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.961 [2024-09-27 15:57:19.224254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.961 [2024-09-27 15:57:19.224421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.961 [2024-09-27 15:57:19.224574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.961 [2024-09-27 15:57:19.224581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.961 [2024-09-27 15:57:19.224586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.961 [2024-09-27 15:57:19.227035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.961 [2024-09-27 15:57:19.236221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.961 [2024-09-27 15:57:19.236797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.961 [2024-09-27 15:57:19.236827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.236835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.237011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.237169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.237175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.237181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.239622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.248954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.249527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.249557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.249566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.249733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.249887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.249900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.249906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.252350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.261697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.262260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.262290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.262299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.262465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.262619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.262625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.262630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.265079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.274409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.274974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.275004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.275013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.275180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.275334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.275340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.275346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.277799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.287134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.287583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.287613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.287622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.287789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.287949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.287956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.287961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.290405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.299877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.300485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.300515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.300524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.300691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.300845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.300851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.300856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.303305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.312505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.313033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.313062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.313071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.313237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.313391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.313397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.313403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.315848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.325181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.325738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.325768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.325779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.325953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.326108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.326114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.326119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.328562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.337890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.338470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.338500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.338508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.338677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.338831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.338837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.338843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.341292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.962 [2024-09-27 15:57:19.350622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.962 [2024-09-27 15:57:19.351127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.962 [2024-09-27 15:57:19.351142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.962 [2024-09-27 15:57:19.351148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.962 [2024-09-27 15:57:19.351299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.962 [2024-09-27 15:57:19.351450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.962 [2024-09-27 15:57:19.351456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.962 [2024-09-27 15:57:19.351461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.962 [2024-09-27 15:57:19.353901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.363264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.363712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.363742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.363750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.363924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.364078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.364088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.364093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.366534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.376009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.376594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.376624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.376632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.376802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.376962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.376969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.376975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.379417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.388746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.389309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.389339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.389347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.389514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.389668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.389674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.389679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.392129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.401459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.401961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.401977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.401983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.402134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.402285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.402291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.402296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.404735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.414079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.414657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.414687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.414695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.414862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.415023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.415030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.415035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.417477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.426805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.427389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.427419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.427428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.427595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.427749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.427755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.427760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.430208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:38.963 [2024-09-27 15:57:19.439535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:38.963 [2024-09-27 15:57:19.440015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:38.963 [2024-09-27 15:57:19.440045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:38.963 [2024-09-27 15:57:19.440053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:38.963 [2024-09-27 15:57:19.440220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:38.963 [2024-09-27 15:57:19.440374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:38.963 [2024-09-27 15:57:19.440380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:38.963 [2024-09-27 15:57:19.440385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:38.963 [2024-09-27 15:57:19.442833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.452171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.452764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.452793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.452802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.452979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.453134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.453140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.453146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.455591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.464791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.465346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.465375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.465384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.465551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.465705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.465711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.465716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.468166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.477494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.478171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.478202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.478210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.478377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.478531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.478537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.478543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.480993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.490182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.490743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.490773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.490782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.490957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.491112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.491118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.491127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.493569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.502900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.503422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.503452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.503460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.503627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.503781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.503787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.503792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.506248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.515593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.516221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.516251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.516259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.516426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.516580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.516586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.516592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.224 [2024-09-27 15:57:19.519043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.224 [2024-09-27 15:57:19.528226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.224 [2024-09-27 15:57:19.528806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.224 [2024-09-27 15:57:19.528836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.224 [2024-09-27 15:57:19.528845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.224 [2024-09-27 15:57:19.529019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.224 [2024-09-27 15:57:19.529174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.224 [2024-09-27 15:57:19.529180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.224 [2024-09-27 15:57:19.529185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.531627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.540958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.541425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.541458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.541467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.541634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.541788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.541794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.541799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.544249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.553587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.554179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.554209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.554217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.554384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.554538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.554544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.554550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.557004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.566207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.566748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.566778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.566786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.566960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.567115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.567121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.567126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.569566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.578921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.579499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.579529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.579537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.579704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.579862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.579868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.579873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.582321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.591652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.592215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.592245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.592254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.592420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.592574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.592580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.592586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.595034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.604363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.604958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.604988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.604997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.605164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.605318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.605323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.605329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.607777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.616977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.617447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.617477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.617486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.617653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.617807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.617813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.617818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.620273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.629604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.630306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.630336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.630344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.630511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.630665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.630671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.630677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.633126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.642313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.642889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.642923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.642932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.643101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.643255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.643261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.225 [2024-09-27 15:57:19.643266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.225 [2024-09-27 15:57:19.645711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.225 [2024-09-27 15:57:19.655046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.225 [2024-09-27 15:57:19.655620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.225 [2024-09-27 15:57:19.655651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.225 [2024-09-27 15:57:19.655659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.225 [2024-09-27 15:57:19.655830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.225 [2024-09-27 15:57:19.655991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.225 [2024-09-27 15:57:19.655998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.226 [2024-09-27 15:57:19.656004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.226 [2024-09-27 15:57:19.658447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.226 [2024-09-27 15:57:19.667955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.226 [2024-09-27 15:57:19.668554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.226 [2024-09-27 15:57:19.668583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.226 [2024-09-27 15:57:19.668595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.226 [2024-09-27 15:57:19.668762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.226 [2024-09-27 15:57:19.668923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.226 [2024-09-27 15:57:19.668930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.226 [2024-09-27 15:57:19.668935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.226 [2024-09-27 15:57:19.671378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.226 [2024-09-27 15:57:19.680710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.226 [2024-09-27 15:57:19.681326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.226 [2024-09-27 15:57:19.681356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.226 [2024-09-27 15:57:19.681365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.226 [2024-09-27 15:57:19.681532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.226 [2024-09-27 15:57:19.681685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.226 [2024-09-27 15:57:19.681691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.226 [2024-09-27 15:57:19.681697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.226 [2024-09-27 15:57:19.684144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.226 [2024-09-27 15:57:19.693333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:39.226 [2024-09-27 15:57:19.693946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:39.226 [2024-09-27 15:57:19.693975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:39.226 [2024-09-27 15:57:19.693984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:39.226 [2024-09-27 15:57:19.694151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:39.226 [2024-09-27 15:57:19.694304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:39.226 [2024-09-27 15:57:19.694310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:39.226 [2024-09-27 15:57:19.694316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:39.226 [2024-09-27 15:57:19.696764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:39.226 [2024-09-27 15:57:19.705961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.226 [2024-09-27 15:57:19.706513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.226 [2024-09-27 15:57:19.706543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.226 [2024-09-27 15:57:19.706552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.226 [2024-09-27 15:57:19.706718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.226 [2024-09-27 15:57:19.706872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.226 [2024-09-27 15:57:19.706881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.226 [2024-09-27 15:57:19.706887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.226 [2024-09-27 15:57:19.709344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.487 [2024-09-27 15:57:19.718677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.719239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.719269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.719278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.719445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.719599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.719605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.719610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.722058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.487 [2024-09-27 15:57:19.731388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.731883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.731919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.731927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.732095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.732249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.732255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.732261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.734707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.487 [2024-09-27 15:57:19.744044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.744652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.744682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.744690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.744857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.745019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.745026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.745031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.747474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.487 [2024-09-27 15:57:19.756670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.757252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.757282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.757291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.757460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.757614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.757620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.757625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.760072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.487 [2024-09-27 15:57:19.769407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.769865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.769900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.769908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.770075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.770229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.770235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.770240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.772682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.487 [2024-09-27 15:57:19.782037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.782599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.782629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.782638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.782805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.782965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.487 [2024-09-27 15:57:19.782972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.487 [2024-09-27 15:57:19.782977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.487 [2024-09-27 15:57:19.785421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.487 [2024-09-27 15:57:19.794776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.487 [2024-09-27 15:57:19.795302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.487 [2024-09-27 15:57:19.795332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.487 [2024-09-27 15:57:19.795341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.487 [2024-09-27 15:57:19.795511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.487 [2024-09-27 15:57:19.795665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.795672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.795677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.798126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.488 [2024-09-27 15:57:19.807458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.807929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.807960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.807969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.808137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.808291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.808297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.808302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.810759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.488 [2024-09-27 15:57:19.820093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.820551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.820581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.820589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.820756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.820917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.820924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.820930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.823373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.488 [2024-09-27 15:57:19.832845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.833422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.833452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.833461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.833628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.833782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.833788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.833797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.836247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.488 [2024-09-27 15:57:19.845575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.846187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.846217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.846225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.846392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.846546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.846552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.846557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.849003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.488 [2024-09-27 15:57:19.858203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.858704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.858719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.858725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.858876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.859034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.859040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.859045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.861489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.488 [2024-09-27 15:57:19.870818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.871309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.871322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.871327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.871478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.871629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.871635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.871640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.874077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.488 [2024-09-27 15:57:19.883548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.884038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.884072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.884081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.884251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.884405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.884412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.884417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.886867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.488 [2024-09-27 15:57:19.896289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.896872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.896908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.896917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.897086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.897240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.897247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.897252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.899695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.488 [2024-09-27 15:57:19.909044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.909508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.909523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.909528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.909679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.909830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.488 [2024-09-27 15:57:19.909836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.488 [2024-09-27 15:57:19.909841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.488 [2024-09-27 15:57:19.912291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.488 [2024-09-27 15:57:19.921763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.488 [2024-09-27 15:57:19.922309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.488 [2024-09-27 15:57:19.922339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.488 [2024-09-27 15:57:19.922347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.488 [2024-09-27 15:57:19.922514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.488 [2024-09-27 15:57:19.922672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.489 [2024-09-27 15:57:19.922678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.489 [2024-09-27 15:57:19.922684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.489 [2024-09-27 15:57:19.925130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.489 [2024-09-27 15:57:19.934459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.489 [2024-09-27 15:57:19.934924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.489 [2024-09-27 15:57:19.934940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.489 [2024-09-27 15:57:19.934945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.489 [2024-09-27 15:57:19.935097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.489 [2024-09-27 15:57:19.935248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.489 [2024-09-27 15:57:19.935254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.489 [2024-09-27 15:57:19.935259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.489 [2024-09-27 15:57:19.937699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.489 [2024-09-27 15:57:19.947167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.489 [2024-09-27 15:57:19.947739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.489 [2024-09-27 15:57:19.947768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.489 [2024-09-27 15:57:19.947777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.489 [2024-09-27 15:57:19.947951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.489 [2024-09-27 15:57:19.948106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.489 [2024-09-27 15:57:19.948112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.489 [2024-09-27 15:57:19.948117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.489 [2024-09-27 15:57:19.950559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.489 [2024-09-27 15:57:19.959901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.489 [2024-09-27 15:57:19.960494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.489 [2024-09-27 15:57:19.960524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.489 [2024-09-27 15:57:19.960533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.489 [2024-09-27 15:57:19.960700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.489 [2024-09-27 15:57:19.960854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.489 [2024-09-27 15:57:19.960860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.489 [2024-09-27 15:57:19.960866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.489 [2024-09-27 15:57:19.963328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.489 [2024-09-27 15:57:19.972527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.489 [2024-09-27 15:57:19.973128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.489 [2024-09-27 15:57:19.973158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.489 [2024-09-27 15:57:19.973167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.489 [2024-09-27 15:57:19.973335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.489 [2024-09-27 15:57:19.973489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.489 [2024-09-27 15:57:19.973495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.489 [2024-09-27 15:57:19.973501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:19.975949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.751 [2024-09-27 15:57:19.985141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.751 [2024-09-27 15:57:19.985621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.751 [2024-09-27 15:57:19.985636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.751 [2024-09-27 15:57:19.985642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.751 [2024-09-27 15:57:19.985793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.751 [2024-09-27 15:57:19.985949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.751 [2024-09-27 15:57:19.985956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.751 [2024-09-27 15:57:19.985961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:19.988421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.751 [2024-09-27 15:57:19.997758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.751 [2024-09-27 15:57:19.998253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.751 [2024-09-27 15:57:19.998267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.751 [2024-09-27 15:57:19.998273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.751 [2024-09-27 15:57:19.998423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.751 [2024-09-27 15:57:19.998574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.751 [2024-09-27 15:57:19.998580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.751 [2024-09-27 15:57:19.998585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:20.001024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.751 [2024-09-27 15:57:20.010415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.751 [2024-09-27 15:57:20.010940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.751 [2024-09-27 15:57:20.010960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.751 [2024-09-27 15:57:20.010970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.751 [2024-09-27 15:57:20.011127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.751 [2024-09-27 15:57:20.011279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.751 [2024-09-27 15:57:20.011285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.751 [2024-09-27 15:57:20.011290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:20.013742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.751 [2024-09-27 15:57:20.023094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.751 [2024-09-27 15:57:20.023691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.751 [2024-09-27 15:57:20.023721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.751 [2024-09-27 15:57:20.023730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.751 [2024-09-27 15:57:20.023904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.751 [2024-09-27 15:57:20.024058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.751 [2024-09-27 15:57:20.024065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.751 [2024-09-27 15:57:20.024070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:20.026514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.751 [2024-09-27 15:57:20.035710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.751 [2024-09-27 15:57:20.036323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.751 [2024-09-27 15:57:20.036354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.751 [2024-09-27 15:57:20.036362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.751 [2024-09-27 15:57:20.036532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.751 [2024-09-27 15:57:20.036687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.751 [2024-09-27 15:57:20.036693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.751 [2024-09-27 15:57:20.036698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.751 [2024-09-27 15:57:20.039150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.752 [2024-09-27 15:57:20.048346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.048901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.048932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.048944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.049114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.049270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.049286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.049293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.051738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.752 [2024-09-27 15:57:20.061090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.061474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.061504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.061513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.061680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.061834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.061840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.061846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.064302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.752 [2024-09-27 15:57:20.073784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.074259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.074275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.074281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.074432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.074583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.074589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.074594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.077035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.752 [2024-09-27 15:57:20.086511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.086968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.086981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.086986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.087138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.087289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.087295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.087300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.089736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.752 [2024-09-27 15:57:20.099207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.099524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.099539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.099544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.099696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.099846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.099852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.099857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.102299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.752 [2024-09-27 15:57:20.111920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.112484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.112513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.112522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.112689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.112843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.112849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.112855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.115307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.752 [2024-09-27 15:57:20.124640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.125271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.125301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.125309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.125476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.125630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.125636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.125641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.128092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.752 [2024-09-27 15:57:20.137279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.137887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.137922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.137931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.138104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.138258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.138263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.138269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.140715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.752 [2024-09-27 15:57:20.149911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.150403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.150418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.150424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.150575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.150726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.150732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.752 [2024-09-27 15:57:20.150737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.752 [2024-09-27 15:57:20.153179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.752 [2024-09-27 15:57:20.162528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.752 [2024-09-27 15:57:20.162961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.752 [2024-09-27 15:57:20.162974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.752 [2024-09-27 15:57:20.162980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.752 [2024-09-27 15:57:20.163131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.752 [2024-09-27 15:57:20.163282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.752 [2024-09-27 15:57:20.163288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.163293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.165730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.753 7283.25 IOPS, 28.45 MiB/s [2024-09-27 15:57:20.175341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.753 [2024-09-27 15:57:20.175830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.753 [2024-09-27 15:57:20.175843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.753 [2024-09-27 15:57:20.175848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.753 [2024-09-27 15:57:20.176004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.753 [2024-09-27 15:57:20.176155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.753 [2024-09-27 15:57:20.176161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.176169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.178605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
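Note: the throughput figure embedded above ("7283.25 IOPS, 28.45 MiB/s") is the periodic bdevperf-style performance line printed while the reconnect attempts keep failing. A minimal sketch in C, assuming a 4 KiB I/O size (the I/O size is not stated in this excerpt, so this is an inference from the numbers), shows the two values are internally consistent: MiB/s = IOPS * io_size / 2^20.

    /* Sketch only: checks that 7283.25 IOPS at an assumed 4 KiB I/O size
     * yields the 28.45 MiB/s reported on the log line above. */
    #include <stdio.h>

    int main(void)
    {
        const double iops = 7283.25;         /* value from the log line above */
        const double io_size_bytes = 4096.0; /* assumption: 4 KiB I/Os */
        double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_per_s); /* prints 28.45 */
        return 0;
    }

7283.25 * 4096 / 1048576 = 28.45, matching the log, which is why a 4 KiB I/O size is the natural reading.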
00:38:39.753 [2024-09-27 15:57:20.188086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.753 [2024-09-27 15:57:20.188652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.753 [2024-09-27 15:57:20.188682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.753 [2024-09-27 15:57:20.188691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.753 [2024-09-27 15:57:20.188858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.753 [2024-09-27 15:57:20.189018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.753 [2024-09-27 15:57:20.189025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.189030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.191473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.753 [2024-09-27 15:57:20.200703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.753 [2024-09-27 15:57:20.201184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.753 [2024-09-27 15:57:20.201214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.753 [2024-09-27 15:57:20.201223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.753 [2024-09-27 15:57:20.201390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.753 [2024-09-27 15:57:20.201543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.753 [2024-09-27 15:57:20.201549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.201555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.204006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:39.753 [2024-09-27 15:57:20.213360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.753 [2024-09-27 15:57:20.213846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.753 [2024-09-27 15:57:20.213861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.753 [2024-09-27 15:57:20.213866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.753 [2024-09-27 15:57:20.214022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.753 [2024-09-27 15:57:20.214174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.753 [2024-09-27 15:57:20.214180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.214185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.216622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:39.753 [2024-09-27 15:57:20.226107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:39.753 [2024-09-27 15:57:20.226600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:39.753 [2024-09-27 15:57:20.226629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:39.753 [2024-09-27 15:57:20.226637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:39.753 [2024-09-27 15:57:20.226805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:39.753 [2024-09-27 15:57:20.226964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:39.753 [2024-09-27 15:57:20.226970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:39.753 [2024-09-27 15:57:20.226976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:39.753 [2024-09-27 15:57:20.229428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.014 [2024-09-27 15:57:20.238806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.014 [2024-09-27 15:57:20.239330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.014 [2024-09-27 15:57:20.239344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.014 [2024-09-27 15:57:20.239350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.014 [2024-09-27 15:57:20.239502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.014 [2024-09-27 15:57:20.239653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.014 [2024-09-27 15:57:20.239659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.014 [2024-09-27 15:57:20.239664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.014 [2024-09-27 15:57:20.242104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.014 [2024-09-27 15:57:20.251439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.014 [2024-09-27 15:57:20.251980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.014 [2024-09-27 15:57:20.252010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.014 [2024-09-27 15:57:20.252019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.014 [2024-09-27 15:57:20.252188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.014 [2024-09-27 15:57:20.252342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.014 [2024-09-27 15:57:20.252348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.014 [2024-09-27 15:57:20.252354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.014 [2024-09-27 15:57:20.254802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.014 [2024-09-27 15:57:20.264162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.014 [2024-09-27 15:57:20.264741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.014 [2024-09-27 15:57:20.264772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.264780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.264956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.265115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.265121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.265126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.267569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.015 [2024-09-27 15:57:20.276841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.277470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.277500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.277508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.277678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.277833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.277839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.277845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.280317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.015 [2024-09-27 15:57:20.289511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.289978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.289993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.289999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.290151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.290301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.290307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.290311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.292753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.015 [2024-09-27 15:57:20.302228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.302814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.302843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.302852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.303027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.303181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.303187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.303193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.305641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.015 [2024-09-27 15:57:20.314856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.315442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.315472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.315480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.315647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.315801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.315807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.315812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.318262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.015 [2024-09-27 15:57:20.327610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.328101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.328131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.328139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.328306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.328460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.328466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.328472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.330922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.015 [2024-09-27 15:57:20.340264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.340835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.340865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.340874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.341050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.341204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.341211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.341216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.343658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.015 [2024-09-27 15:57:20.353000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.353569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.353599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.353610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.353777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.353937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.353944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.353949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.356390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.015 [2024-09-27 15:57:20.365749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.366234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.366251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.366256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.366408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.366559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.366564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.366569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.369011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.015 [2024-09-27 15:57:20.378493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.015 [2024-09-27 15:57:20.379116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.015 [2024-09-27 15:57:20.379146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.015 [2024-09-27 15:57:20.379155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.015 [2024-09-27 15:57:20.379322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.015 [2024-09-27 15:57:20.379476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.015 [2024-09-27 15:57:20.379482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.015 [2024-09-27 15:57:20.379487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.015 [2024-09-27 15:57:20.381937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.015 [2024-09-27 15:57:20.391135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.391593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.391624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.391632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.391799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.391959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.391969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.391975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.394421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.016 [2024-09-27 15:57:20.403784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.404415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.404445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.404454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.404623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.404777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.404783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.404789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.407241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.016 [2024-09-27 15:57:20.416455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.416955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.416971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.416976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.417128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.417279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.417285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.417289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.419728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.016 [2024-09-27 15:57:20.429215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.429688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.429700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.429705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.429856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.430010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.430017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.430022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.432459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.016 [2024-09-27 15:57:20.441946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.442400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.442411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.442417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.442567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.442718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.442724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.442729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.445168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.016 [2024-09-27 15:57:20.454648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.455116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.455128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.455134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.455285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.455436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.455441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.455446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.457888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.016 [2024-09-27 15:57:20.467384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.467980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.468011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.468020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.468189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.468343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.468349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.468354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.470804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.016 [2024-09-27 15:57:20.480005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.480572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.480602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.480610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.480781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.480941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.480948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.480953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.483397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.016 [2024-09-27 15:57:20.492740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.016 [2024-09-27 15:57:20.493123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.016 [2024-09-27 15:57:20.493139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.016 [2024-09-27 15:57:20.493144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.016 [2024-09-27 15:57:20.493296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.016 [2024-09-27 15:57:20.493447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.016 [2024-09-27 15:57:20.493453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.016 [2024-09-27 15:57:20.493458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.016 [2024-09-27 15:57:20.495901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.277 [2024-09-27 15:57:20.505380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.505839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.505851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.505857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.506012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.506164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.506170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.506175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.508618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.278 [2024-09-27 15:57:20.518117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.518545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.518575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.518584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.518751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.518911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.518918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.518926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.521370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.278 [2024-09-27 15:57:20.530855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.531427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.531457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.531465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.531634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.531788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.531794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.531799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.534248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.278 [2024-09-27 15:57:20.543589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.544196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.544226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.544234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.544401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.544555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.544561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.544567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.547016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.278 [2024-09-27 15:57:20.556213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.556784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.556814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.556823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.557000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.557156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.557162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.557167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.559611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
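Each repetition above walks the same fixed sequence: disconnect notice, failed socket connect (errno 111), failed flush of the now-dead qpair ("(9): Bad file descriptor", i.e. EBADF), controller marked as failed, and finally a reset completion reported as failed. A compact toy state machine — hypothetical names, plain C, not SPDK's internal types or control flow — that reproduces that ordering might look like the sketch below.

    /* Illustrative only: a toy poller that emits the same ordering seen in
     * the log (disconnect -> connect fails -> flush fails with EBADF ->
     * controller failed -> reset completes with error). Not SPDK code. */
    #include <stdbool.h>
    #include <stdio.h>

    enum reset_step { STEP_DISCONNECT, STEP_CONNECT, STEP_FLUSH, STEP_FAIL, STEP_DONE };

    static bool target_reachable = false;   /* the log's target never comes back */

    /* Advances one step per call; returns true while the attempt is in progress. */
    static bool reset_poll(enum reset_step *step)
    {
        switch (*step) {
        case STEP_DISCONNECT:
            printf("resetting controller\n");
            *step = STEP_CONNECT;
            return true;
        case STEP_CONNECT:
            if (!target_reachable) {
                printf("connect() failed, errno = 111\n");
                *step = STEP_FLUSH;
            } else {
                *step = STEP_DONE;
            }
            return true;
        case STEP_FLUSH:
            printf("Failed to flush qpair (9): Bad file descriptor\n");
            *step = STEP_FAIL;
            return true;
        case STEP_FAIL:
            printf("controller reinitialization failed; Resetting controller failed.\n");
            *step = STEP_DONE;
            return true;
        default:
            return false;   /* attempt finished (here: unsuccessfully) */
        }
    }

    int main(void)
    {
        enum reset_step step = STEP_DISCONNECT;
        while (reset_poll(&step)) { /* poll until the attempt completes */ }
        return 0;
    }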
00:38:40.278 [2024-09-27 15:57:20.568963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.569363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.569377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.569383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.569535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.569686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.569691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.569696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.572139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.278 [2024-09-27 15:57:20.581619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.581892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.581908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.581914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.582064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.582215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.582221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.582226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.584660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.278 [2024-09-27 15:57:20.594286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.594738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.594750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.594755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.594910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.595061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.595067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.595072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.597508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.278 [2024-09-27 15:57:20.606992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.607545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.607575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.607583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.607750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.607915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.278 [2024-09-27 15:57:20.607922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.278 [2024-09-27 15:57:20.607928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.278 [2024-09-27 15:57:20.610372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.278 [2024-09-27 15:57:20.619604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.278 [2024-09-27 15:57:20.620189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.278 [2024-09-27 15:57:20.620219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.278 [2024-09-27 15:57:20.620228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.278 [2024-09-27 15:57:20.620395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.278 [2024-09-27 15:57:20.620549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.620555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.620560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.623011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.279 [2024-09-27 15:57:20.632355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.632974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.633004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.633012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.633182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.633335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.633342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.633347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.635793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.279 [2024-09-27 15:57:20.644995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.645624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.645654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.645663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.645831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.645991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.645998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.646004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.648451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.279 [2024-09-27 15:57:20.657659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.658130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.658144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.658150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.658301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.658452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.658458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.658463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.660905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.279 [2024-09-27 15:57:20.670417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.670799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.670812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.670818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.670972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.671124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.671130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.671134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.673573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.279 [2024-09-27 15:57:20.683054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.683622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.683652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.683660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.683830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.683990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.683997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.684002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.686446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.279 [2024-09-27 15:57:20.695789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.696278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.696293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.696305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.696457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.696609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.696614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.696619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.699061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.279 [2024-09-27 15:57:20.708546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.709002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.709014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.709020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.709171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.709322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.709328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.709333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.711769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.279 [2024-09-27 15:57:20.721260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.721735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.721747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.721752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.721908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.722059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.722065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.722070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.724506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.279 [2024-09-27 15:57:20.733984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.734558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.734588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.734596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.734763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.734923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.279 [2024-09-27 15:57:20.734934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.279 [2024-09-27 15:57:20.734940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.279 [2024-09-27 15:57:20.737384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.279 [2024-09-27 15:57:20.746721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.279 [2024-09-27 15:57:20.747168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.279 [2024-09-27 15:57:20.747183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.279 [2024-09-27 15:57:20.747189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.279 [2024-09-27 15:57:20.747340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.279 [2024-09-27 15:57:20.747491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.280 [2024-09-27 15:57:20.747496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.280 [2024-09-27 15:57:20.747501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.280 [2024-09-27 15:57:20.749942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.280 [2024-09-27 15:57:20.759429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.280 [2024-09-27 15:57:20.759945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.280 [2024-09-27 15:57:20.759964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.280 [2024-09-27 15:57:20.759970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.280 [2024-09-27 15:57:20.760126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.280 [2024-09-27 15:57:20.760278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.280 [2024-09-27 15:57:20.760283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.280 [2024-09-27 15:57:20.760289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.280 [2024-09-27 15:57:20.762732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.542 [2024-09-27 15:57:20.772072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.542 [2024-09-27 15:57:20.772560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.542 [2024-09-27 15:57:20.772573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.542 [2024-09-27 15:57:20.772578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.542 [2024-09-27 15:57:20.772729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.542 [2024-09-27 15:57:20.772880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.542 [2024-09-27 15:57:20.772885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.542 [2024-09-27 15:57:20.772890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.542 [2024-09-27 15:57:20.775332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.542 [2024-09-27 15:57:20.784812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.542 [2024-09-27 15:57:20.785392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.542 [2024-09-27 15:57:20.785422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.542 [2024-09-27 15:57:20.785431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.542 [2024-09-27 15:57:20.785598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.542 [2024-09-27 15:57:20.785751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.542 [2024-09-27 15:57:20.785757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.542 [2024-09-27 15:57:20.785763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.542 [2024-09-27 15:57:20.788213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.542 [2024-09-27 15:57:20.797553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.542 [2024-09-27 15:57:20.798149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.542 [2024-09-27 15:57:20.798179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.542 [2024-09-27 15:57:20.798188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.542 [2024-09-27 15:57:20.798355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.542 [2024-09-27 15:57:20.798508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.542 [2024-09-27 15:57:20.798514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.542 [2024-09-27 15:57:20.798520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.542 [2024-09-27 15:57:20.800969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:40.542 [2024-09-27 15:57:20.810177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:40.542 [2024-09-27 15:57:20.810646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:40.542 [2024-09-27 15:57:20.810661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:40.542 [2024-09-27 15:57:20.810666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:40.542 [2024-09-27 15:57:20.810818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:40.542 [2024-09-27 15:57:20.811044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:40.542 [2024-09-27 15:57:20.811051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:40.542 [2024-09-27 15:57:20.811056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:40.542 [2024-09-27 15:57:20.813504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:40.542 [2024-09-27 15:57:20.822864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.542 [2024-09-27 15:57:20.823313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.542 [2024-09-27 15:57:20.823327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.542 [2024-09-27 15:57:20.823332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.542 [2024-09-27 15:57:20.823487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.542 [2024-09-27 15:57:20.823639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.542 [2024-09-27 15:57:20.823644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.542 [2024-09-27 15:57:20.823649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.542 [2024-09-27 15:57:20.826092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.542 [2024-09-27 15:57:20.835572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.542 [2024-09-27 15:57:20.836128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.542 [2024-09-27 15:57:20.836158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.542 [2024-09-27 15:57:20.836167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.542 [2024-09-27 15:57:20.836334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.542 [2024-09-27 15:57:20.836488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.542 [2024-09-27 15:57:20.836494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.542 [2024-09-27 15:57:20.836499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.542 [2024-09-27 15:57:20.838946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.542 [2024-09-27 15:57:20.848287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.542 [2024-09-27 15:57:20.848862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.542 [2024-09-27 15:57:20.848892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.542 [2024-09-27 15:57:20.848907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.542 [2024-09-27 15:57:20.849077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.542 [2024-09-27 15:57:20.849231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.542 [2024-09-27 15:57:20.849237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.542 [2024-09-27 15:57:20.849243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.542 [2024-09-27 15:57:20.851683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.542 [2024-09-27 15:57:20.861045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.542 [2024-09-27 15:57:20.861671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.542 [2024-09-27 15:57:20.861701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.542 [2024-09-27 15:57:20.861710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.542 [2024-09-27 15:57:20.861877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.542 [2024-09-27 15:57:20.862037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.542 [2024-09-27 15:57:20.862044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.542 [2024-09-27 15:57:20.862053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.542 [2024-09-27 15:57:20.864504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.542 [2024-09-27 15:57:20.873698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.542 [2024-09-27 15:57:20.874263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.542 [2024-09-27 15:57:20.874293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.542 [2024-09-27 15:57:20.874301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.874468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.874622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.874630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.874635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.877083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.886428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.886996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.887027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.887035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.887204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.887358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.887364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.887370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.889817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.899157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.899744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.899774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.899783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.899957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.900112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.900119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.900125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.902568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.911773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.912223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.912237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.912243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.912394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.912545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.912551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.912556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.915006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.924487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.925028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.925059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.925068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.925237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.925391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.925397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.925402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.927849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.937159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.937631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.937647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.937652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.937804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.937960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.937966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.937971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.940413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.949911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.950391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.950403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.950408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.950559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.950714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.950720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.950725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.953169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.962663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.963032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.963046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.963051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.963202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.963353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.963359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.963364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.965810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.975297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.975788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.975800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.975805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.975959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.976111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.976117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.976122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.978558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:20.988037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:20.988584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:20.988614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.543 [2024-09-27 15:57:20.988623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.543 [2024-09-27 15:57:20.988790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.543 [2024-09-27 15:57:20.988950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.543 [2024-09-27 15:57:20.988957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.543 [2024-09-27 15:57:20.988963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.543 [2024-09-27 15:57:20.991410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.543 [2024-09-27 15:57:21.000750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.543 [2024-09-27 15:57:21.001275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.543 [2024-09-27 15:57:21.001305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.544 [2024-09-27 15:57:21.001314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.544 [2024-09-27 15:57:21.001481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.544 [2024-09-27 15:57:21.001635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.544 [2024-09-27 15:57:21.001641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.544 [2024-09-27 15:57:21.001647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.544 [2024-09-27 15:57:21.004097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.544 [2024-09-27 15:57:21.013443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.544 [2024-09-27 15:57:21.014048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.544 [2024-09-27 15:57:21.014078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.544 [2024-09-27 15:57:21.014087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.544 [2024-09-27 15:57:21.014254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.544 [2024-09-27 15:57:21.014408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.544 [2024-09-27 15:57:21.014414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.544 [2024-09-27 15:57:21.014420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.544 [2024-09-27 15:57:21.016868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.544 [2024-09-27 15:57:21.026088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.544 [2024-09-27 15:57:21.026659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.544 [2024-09-27 15:57:21.026688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.544 [2024-09-27 15:57:21.026697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.544 [2024-09-27 15:57:21.026864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.544 [2024-09-27 15:57:21.027025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.544 [2024-09-27 15:57:21.027032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.544 [2024-09-27 15:57:21.027038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.029484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.038822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.039360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.039389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.039401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.039568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.039722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.039728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.039733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.042183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.051516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.052124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.052155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.052163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.052330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.052484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.052490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.052496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.054945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.064151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.064628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.064643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.064649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.064800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.064965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.064971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.064976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.067416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.076905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.077256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.077268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.077274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.077425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.077575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.077586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.077591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.080035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.089517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.089966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.089978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.089984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.090135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.090286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.090292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.090297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.092738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.102221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.102798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.102828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.805 [2024-09-27 15:57:21.102837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.805 [2024-09-27 15:57:21.103014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.805 [2024-09-27 15:57:21.103169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.805 [2024-09-27 15:57:21.103175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.805 [2024-09-27 15:57:21.103180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.805 [2024-09-27 15:57:21.105621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.805 [2024-09-27 15:57:21.114964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.805 [2024-09-27 15:57:21.115538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.805 [2024-09-27 15:57:21.115568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.115576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.115746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.115908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.115915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.115920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.118364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.127708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.128290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.128320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.128329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.128496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.128650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.128656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.128661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.131114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.140453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.140999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.141028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.141037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.141205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.141359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.141365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.141371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.143822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.153166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.153753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.153783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.153792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.153968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.154123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.154129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.154135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.156580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.165798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.166430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.166460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.166469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.166639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.166794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.166799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.166805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.169255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 5826.60 IOPS, 22.76 MiB/s [2024-09-27 15:57:21.178436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.178885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.178904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.178910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.179062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.179213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.179219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.179224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.181663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.191136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.191576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.191588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.191594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.191744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.191900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.191906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.191911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.194380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.203853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.204302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.204314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.204319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.204470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.204621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.204626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.204634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.207074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.216560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.217022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.217035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.806 [2024-09-27 15:57:21.217040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.806 [2024-09-27 15:57:21.217191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.806 [2024-09-27 15:57:21.217342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.806 [2024-09-27 15:57:21.217348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.806 [2024-09-27 15:57:21.217353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.806 [2024-09-27 15:57:21.219793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.806 [2024-09-27 15:57:21.229264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.806 [2024-09-27 15:57:21.229701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.806 [2024-09-27 15:57:21.229713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.807 [2024-09-27 15:57:21.229719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.807 [2024-09-27 15:57:21.229869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.807 [2024-09-27 15:57:21.230025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.807 [2024-09-27 15:57:21.230031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.807 [2024-09-27 15:57:21.230035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.807 [2024-09-27 15:57:21.232471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.807 [2024-09-27 15:57:21.241968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.807 [2024-09-27 15:57:21.242551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.807 [2024-09-27 15:57:21.242580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.807 [2024-09-27 15:57:21.242589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.807 [2024-09-27 15:57:21.242756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.807 [2024-09-27 15:57:21.242918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.807 [2024-09-27 15:57:21.242924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.807 [2024-09-27 15:57:21.242930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.807 [2024-09-27 15:57:21.245372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.807 [2024-09-27 15:57:21.254703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.807 [2024-09-27 15:57:21.255281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.807 [2024-09-27 15:57:21.255312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.807 [2024-09-27 15:57:21.255320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.807 [2024-09-27 15:57:21.255487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.807 [2024-09-27 15:57:21.255641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.807 [2024-09-27 15:57:21.255647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.807 [2024-09-27 15:57:21.255652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.807 [2024-09-27 15:57:21.258106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.807 [2024-09-27 15:57:21.267451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.807 [2024-09-27 15:57:21.268020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.807 [2024-09-27 15:57:21.268049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.807 [2024-09-27 15:57:21.268058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.807 [2024-09-27 15:57:21.268227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.807 [2024-09-27 15:57:21.268381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.807 [2024-09-27 15:57:21.268387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.807 [2024-09-27 15:57:21.268393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.807 [2024-09-27 15:57:21.270841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:40.807 [2024-09-27 15:57:21.280172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:40.807 [2024-09-27 15:57:21.280694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:40.807 [2024-09-27 15:57:21.280724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:40.807 [2024-09-27 15:57:21.280733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:40.807 [2024-09-27 15:57:21.280907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:40.807 [2024-09-27 15:57:21.281062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:40.807 [2024-09-27 15:57:21.281068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:40.807 [2024-09-27 15:57:21.281073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:40.807 [2024-09-27 15:57:21.283516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.070 [2024-09-27 15:57:21.292852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.070 [2024-09-27 15:57:21.293399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.070 [2024-09-27 15:57:21.293429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.070 [2024-09-27 15:57:21.293437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.070 [2024-09-27 15:57:21.293608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.070 [2024-09-27 15:57:21.293762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.070 [2024-09-27 15:57:21.293768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.070 [2024-09-27 15:57:21.293773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.070 [2024-09-27 15:57:21.296223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.070 [2024-09-27 15:57:21.305560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.070 [2024-09-27 15:57:21.306045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.070 [2024-09-27 15:57:21.306061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.070 [2024-09-27 15:57:21.306066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.070 [2024-09-27 15:57:21.306218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.070 [2024-09-27 15:57:21.306369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.070 [2024-09-27 15:57:21.306374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.070 [2024-09-27 15:57:21.306379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.070 [2024-09-27 15:57:21.308824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.070 [2024-09-27 15:57:21.318306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.070 [2024-09-27 15:57:21.318795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.070 [2024-09-27 15:57:21.318808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.070 [2024-09-27 15:57:21.318813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.070 [2024-09-27 15:57:21.318969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.070 [2024-09-27 15:57:21.319121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.070 [2024-09-27 15:57:21.319126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.070 [2024-09-27 15:57:21.319131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.070 [2024-09-27 15:57:21.321568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.070 [2024-09-27 15:57:21.331041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.070 [2024-09-27 15:57:21.331534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.070 [2024-09-27 15:57:21.331546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.070 [2024-09-27 15:57:21.331551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.070 [2024-09-27 15:57:21.331702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.070 [2024-09-27 15:57:21.331852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.070 [2024-09-27 15:57:21.331857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.070 [2024-09-27 15:57:21.331869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.070 [2024-09-27 15:57:21.334309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.070 [2024-09-27 15:57:21.343777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.070 [2024-09-27 15:57:21.344324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.071 [2024-09-27 15:57:21.344354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.071 [2024-09-27 15:57:21.344363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.071 [2024-09-27 15:57:21.344530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.071 [2024-09-27 15:57:21.344684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.071 [2024-09-27 15:57:21.344690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.071 [2024-09-27 15:57:21.344695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.071 [2024-09-27 15:57:21.347146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.071 [2024-09-27 15:57:21.356475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.071 [2024-09-27 15:57:21.357056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.071 [2024-09-27 15:57:21.357086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.071 [2024-09-27 15:57:21.357094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.071 [2024-09-27 15:57:21.357262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.071 [2024-09-27 15:57:21.357416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.071 [2024-09-27 15:57:21.357422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.071 [2024-09-27 15:57:21.357427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.071 [2024-09-27 15:57:21.359876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.071 [2024-09-27 15:57:21.369219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.071 [2024-09-27 15:57:21.369758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.071 [2024-09-27 15:57:21.369788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.071 [2024-09-27 15:57:21.369796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.071 [2024-09-27 15:57:21.369968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.071 [2024-09-27 15:57:21.370123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.071 [2024-09-27 15:57:21.370130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.071 [2024-09-27 15:57:21.370135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.071 [2024-09-27 15:57:21.372576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.071 [2024-09-27 15:57:21.381905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.382448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.382482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.382490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.382657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.382811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.382817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.382823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.385273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.071 [2024-09-27 15:57:21.394629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.395209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.395239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.395247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.395414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.395568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.395574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.395579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.398029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.071 [2024-09-27 15:57:21.407357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.407843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.407859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.407864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.408024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.408180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.408187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.408192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.410632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.071 [2024-09-27 15:57:21.419986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.420503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.420533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.420542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.420711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.420869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.420875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.420880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.423329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.071 [2024-09-27 15:57:21.432666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.433141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.433157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.433162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.433315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.433466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.433471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.433476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.435920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.071 [2024-09-27 15:57:21.445298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.445748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.445778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.445787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.445963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.446118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.446124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.446129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.071 [2024-09-27 15:57:21.448576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.071 [2024-09-27 15:57:21.457927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.071 [2024-09-27 15:57:21.458539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.071 [2024-09-27 15:57:21.458568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.071 [2024-09-27 15:57:21.458577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.071 [2024-09-27 15:57:21.458744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.071 [2024-09-27 15:57:21.458905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.071 [2024-09-27 15:57:21.458911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.071 [2024-09-27 15:57:21.458917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.461357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.072 [2024-09-27 15:57:21.470557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.471017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.471047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.471055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.471222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.471376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.471382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.471387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.473835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.072 [2024-09-27 15:57:21.483309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.483877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.483911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.483921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.484090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.484244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.484250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.484256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.486698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.072 [2024-09-27 15:57:21.496052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.496606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.496637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.496646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.496813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.496971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.496978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.496983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.499424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.072 [2024-09-27 15:57:21.508765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.509299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.509329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.509341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.509509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.509663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.509669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.509675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.512125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.072 [2024-09-27 15:57:21.521465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.521985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.522018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.522027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.522196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.522350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.522356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.522361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.524809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:41.072 [2024-09-27 15:57:21.534138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.534667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.534682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.534687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.534839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.534994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.535001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.535005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.537440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.072 [2024-09-27 15:57:21.546769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.072 [2024-09-27 15:57:21.547230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.072 [2024-09-27 15:57:21.547242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.072 [2024-09-27 15:57:21.547248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.072 [2024-09-27 15:57:21.547398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.072 [2024-09-27 15:57:21.547549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.072 [2024-09-27 15:57:21.547558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.072 [2024-09-27 15:57:21.547563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.072 [2024-09-27 15:57:21.550002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
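Every failed cycle above begins the same way: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED) because nothing is listening on the target port while the nvmf app is down, and the half-built qpair is then torn down, which is where the later "Bad file descriptor" flush error comes from. A throwaway repro of that first error, illustrative only and not part of the test (the address and port are simply the ones from the log):

    # With no listener on the port, the TCP SYN is answered with RST and the
    # connect() behind bash's /dev/tcp redirection fails with ECONNREFUSED,
    # the same errno 111 that posix_sock_create() reports above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connect() refused (errno 111 / ECONNREFUSED)"
    fi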
00:38:41.335 [... the identical reset/reconnect cycle repeats 24 more times, from 15:57:21.559 through 15:57:21.854, while the Jenkins timestamp advances from 00:38:41.335 to 00:38:41.599; only the timestamps differ ...]
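The retry cadence is visible in the timestamps: consecutive "resetting controller" notices land roughly 12.7 ms apart. To check that from a saved copy of a log like this one, a quick pass such as the following works (bdevperf.log is a hypothetical file name; the field split assumes the [date HH:MM:SS.usec] record layout shown above and that all records share the same minute):

    # Print the gap between consecutive reset attempts, in milliseconds.
    grep -o '15:57:[0-9.]*] nvme_ctrlr.c:1724' bdevperf.log \
      | awk -F'[]:]' '{t = $3; if (p != "") printf "%.1f ms\n", (t - p) * 1000; p = t}'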
00:38:41.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 637663 Killed "${NVMF_APP[@]}" "$@"
00:38:41.599 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:41.600 [... reset/reconnect cycle at 15:57:21.864, same errors as above ...]
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=639267
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 639267
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 639267 ']'
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:41.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
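The trace above is the harness restarting the target after the old app (pid 637663) was killed: nvmfappstart launches a fresh nvmf_tgt inside the test network namespace, and waitforlisten polls the RPC socket (up to max_retries=100) until the new process answers. A simplified sketch of that step, assumed rather than copied from nvmf/common.sh and autotest_common.sh:

    # Launch the target in the test netns, then poll the RPC UNIX socket
    # until the app is up; rpc_get_methods only succeeds once nvmf_tgt is
    # listening on /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            echo "nvmf_tgt (pid $nvmfpid) is ready"
            break
        fi
        sleep 0.5
    done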
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:41.600 15:57:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:41.600 [... reset/reconnect cycles at 15:57:21.876 and 15:57:21.889, same errors as above ...]
00:38:41.600 [... reset/reconnect cycles at 15:57:21.902 and 15:57:21.914, same errors as above ...]
00:38:41.600 [2024-09-27 15:57:21.927490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.600 [2024-09-27 15:57:21.927971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.600 [2024-09-27 15:57:21.927984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.600 [2024-09-27 15:57:21.927989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.600 [2024-09-27 15:57:21.928141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.600 [2024-09-27 15:57:21.928292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.600 [2024-09-27 15:57:21.928297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.600 [2024-09-27 15:57:21.928302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.600 [2024-09-27 15:57:21.930740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:41.600 [2024-09-27 15:57:21.934456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:41.600 [2024-09-27 15:57:21.934505] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.600 [2024-09-27 15:57:21.940226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:41.600 [2024-09-27 15:57:21.940721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.600 [2024-09-27 15:57:21.940734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:41.600 [2024-09-27 15:57:21.940740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:41.600 [2024-09-27 15:57:21.940890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:41.600 [2024-09-27 15:57:21.941047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:41.600 [2024-09-27 15:57:21.941054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:41.600 [2024-09-27 15:57:21.941059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:41.600 [2024-09-27 15:57:21.943498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
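[editor's note] The "Starting SPDK ... / DPDK EAL parameters" pair interleaved above is a second SPDK app being launched while bdevperf keeps retrying. Its core mask -c 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, which matches the "Total cores available: 3" notice and the three reactor start-ups logged further down. A small illustrative helper (not from the test) for decoding such masks:

# Print the CPU cores selected by an EAL core mask (each set bit = one core).
mask=0xE
for bit in $(seq 0 31); do
  (( (mask >> bit) & 1 )) && printf 'core %d\n' "$bit"
done
# 0xE = 0b1110 -> core 1, core 2, core 3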
00:38:41.600 [2024-09-27 15:57:21.952844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.600 [2024-09-27 15:57:21.953338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.600 [2024-09-27 15:57:21.953349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.600 [2024-09-27 15:57:21.953355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.600 [2024-09-27 15:57:21.953506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.600 [2024-09-27 15:57:21.953656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.600 [2024-09-27 15:57:21.953662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.600 [2024-09-27 15:57:21.953673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.600 [2024-09-27 15:57:21.956120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.600 [2024-09-27 15:57:21.965552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.600 [2024-09-27 15:57:21.966006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.600 [2024-09-27 15:57:21.966021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:21.966027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:21.966179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:21.966330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:21.966336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:21.966342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:21.968790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:21.978311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:21.978797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:21.978810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:21.978816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:21.978974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:21.979126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:21.979132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:21.979137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:21.981576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:21.990931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:21.991375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:21.991388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:21.991394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:21.991545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:21.991696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:21.991702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:21.991706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:21.994152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.003642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.004119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.004131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.004137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.004290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.004441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.004447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.004452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.006897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.016400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.016900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.016914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.016922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.017075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.017226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.017232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.017237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.018194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:41.601 [2024-09-27 15:57:22.019677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.029038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.029647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.029679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.029689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.029861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.030023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.030030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.030036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.032483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.041684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.042199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.042219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.042228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.042386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.042540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.042546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.042552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.044996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.046489] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:41.601 [2024-09-27 15:57:22.046516] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:41.601 [2024-09-27 15:57:22.046525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:41.601 [2024-09-27 15:57:22.046532] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:41.601 [2024-09-27 15:57:22.046538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
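[editor's note] The five app_setup_trace notices give the trace recipe verbatim; both commands below are taken straight from them and would be run on the same host while, or right after, the nvmf app is up:

# Live snapshot of nvmf tracepoints from shared memory (instance id 0):
spdk_trace -s nvmf -i 0
# Or stash the raw trace buffer for offline analysis once the run is over:
cp /dev/shm/nvmf_trace.0 /tmp/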
00:38:41.601 [2024-09-27 15:57:22.046710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:38:41.601 [2024-09-27 15:57:22.046921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:38:41.601 [2024-09-27 15:57:22.046921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:38:41.601 [2024-09-27 15:57:22.054340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.054715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.054730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.054737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.054889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.055047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.055054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.055059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.057497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.601 [2024-09-27 15:57:22.067038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.067670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.067704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.601 [2024-09-27 15:57:22.067714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.601 [2024-09-27 15:57:22.067889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.601 [2024-09-27 15:57:22.068063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.601 [2024-09-27 15:57:22.068070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.601 [2024-09-27 15:57:22.068076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.601 [2024-09-27 15:57:22.070519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
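[editor's note] With the three reactors up, the freshly started nvmf target still has no TCP listener, so bdevperf's reconnect attempts keep bouncing until the test script reprovisions the subsystem. This excerpt does not show those RPC calls; the following is a hypothetical sketch of what re-creating the listener looks like with SPDK's stock rpc.py, with the NQN, address and port taken from the errors above:

# Hypothetical re-provisioning; the test's actual RPC calls are not in this log.
scripts/rpc.py nvmf_create_transport -t TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Once a listener accepts on 10.0.0.2:4420, the next reconnect poll can succeed.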
00:38:41.601 [2024-09-27 15:57:22.079744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.601 [2024-09-27 15:57:22.080397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.601 [2024-09-27 15:57:22.080429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.602 [2024-09-27 15:57:22.080438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.602 [2024-09-27 15:57:22.080609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.602 [2024-09-27 15:57:22.080763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.602 [2024-09-27 15:57:22.080770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.602 [2024-09-27 15:57:22.080776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.602 [2024-09-27 15:57:22.083227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.092425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.092955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.092971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.092977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.864 [2024-09-27 15:57:22.093129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.864 [2024-09-27 15:57:22.093281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.864 [2024-09-27 15:57:22.093287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.864 [2024-09-27 15:57:22.093292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.864 [2024-09-27 15:57:22.095729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.105068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.105569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.105582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.105587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.864 [2024-09-27 15:57:22.105738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.864 [2024-09-27 15:57:22.105890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.864 [2024-09-27 15:57:22.105900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.864 [2024-09-27 15:57:22.105906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.864 [2024-09-27 15:57:22.108343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.117702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.118231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.118262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.118271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.864 [2024-09-27 15:57:22.118443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.864 [2024-09-27 15:57:22.118598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.864 [2024-09-27 15:57:22.118604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.864 [2024-09-27 15:57:22.118610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.864 [2024-09-27 15:57:22.121062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.130402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.130914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.130944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.130953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.864 [2024-09-27 15:57:22.131125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.864 [2024-09-27 15:57:22.131279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.864 [2024-09-27 15:57:22.131285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.864 [2024-09-27 15:57:22.131291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.864 [2024-09-27 15:57:22.133740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.143088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.143697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.143728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.143738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.864 [2024-09-27 15:57:22.143913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.864 [2024-09-27 15:57:22.144069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.864 [2024-09-27 15:57:22.144076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.864 [2024-09-27 15:57:22.144081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.864 [2024-09-27 15:57:22.146524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.864 [2024-09-27 15:57:22.155726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.864 [2024-09-27 15:57:22.156347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.864 [2024-09-27 15:57:22.156378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.864 [2024-09-27 15:57:22.156387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.156554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.156709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.156716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.156725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.159182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.168402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.168886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.168923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.168932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.169101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.169255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.169262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.169268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.171713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 4855.50 IOPS, 18.97 MiB/s [2024-09-27 15:57:22.181054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.181537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.181567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.181576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.181745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.181906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.181913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.181919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.184363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.193705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.194332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.194363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.194372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.194540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.194694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.194701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.194706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.197155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
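[editor's note] The "4855.50 IOPS, 18.97 MiB/s" fragment spliced into the stream above is bdevperf's periodic throughput counter landing mid-line. The two figures are self-consistent with a 4 KiB I/O size, which is an inference from the ratio, not something this excerpt states: 4855.50 x 4096 B = 19,888,128 B/s, and 19,888,128 / 1,048,576 ~ 18.97 MiB/s. Quick check:

# 4 KiB per I/O is inferred from the IOPS/throughput ratio.
awk 'BEGIN { print 4855.50 * 4096 / (1024 * 1024) }'   # 18.9668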
00:38:41.865 [2024-09-27 15:57:22.206354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.206869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.206884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.206890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.207046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.207198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.207204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.207209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.209653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.219051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.219388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.219400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.219406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.219557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.219708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.219714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.219719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.222162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.231784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.232388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.232418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.232427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.232594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.232749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.232755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.232761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.235214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.244413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.244934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.244957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.244963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.245120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.245276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.245282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.245287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.247729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.257072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.257681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.257712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.257721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.257889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.258050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.258057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.258063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.260511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.269724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.270296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.270327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.865 [2024-09-27 15:57:22.270337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.865 [2024-09-27 15:57:22.270504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.865 [2024-09-27 15:57:22.270659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.865 [2024-09-27 15:57:22.270665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.865 [2024-09-27 15:57:22.270670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.865 [2024-09-27 15:57:22.273123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.865 [2024-09-27 15:57:22.282366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.865 [2024-09-27 15:57:22.283008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.865 [2024-09-27 15:57:22.283039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.283048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.283216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.283371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.283378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.283383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.285837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.866 [2024-09-27 15:57:22.295043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.866 [2024-09-27 15:57:22.295581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.866 [2024-09-27 15:57:22.295611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.295620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.295787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.295948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.295955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.295961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.298404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.866 [2024-09-27 15:57:22.307743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.866 [2024-09-27 15:57:22.308218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.866 [2024-09-27 15:57:22.308233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.308239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.308392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.308543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.308549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.308554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.310997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.866 [2024-09-27 15:57:22.320489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.866 [2024-09-27 15:57:22.320944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.866 [2024-09-27 15:57:22.320957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.320963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.321115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.321266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.321271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.321277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.323713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.866 [2024-09-27 15:57:22.333196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.866 [2024-09-27 15:57:22.333794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.866 [2024-09-27 15:57:22.333825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.333837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.334011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.334167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.334173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.334178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.336619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:41.866 [2024-09-27 15:57:22.345816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:41.866 [2024-09-27 15:57:22.346419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.866 [2024-09-27 15:57:22.346450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:41.866 [2024-09-27 15:57:22.346459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:41.866 [2024-09-27 15:57:22.346626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:41.866 [2024-09-27 15:57:22.346781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:41.866 [2024-09-27 15:57:22.346788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:41.866 [2024-09-27 15:57:22.346793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:41.866 [2024-09-27 15:57:22.349242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.128 [2024-09-27 15:57:22.358439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.128 [2024-09-27 15:57:22.358944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.128 [2024-09-27 15:57:22.358959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.128 [2024-09-27 15:57:22.358965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.128 [2024-09-27 15:57:22.359117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.128 [2024-09-27 15:57:22.359269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.128 [2024-09-27 15:57:22.359275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.128 [2024-09-27 15:57:22.359280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.128 [2024-09-27 15:57:22.361725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.128 [2024-09-27 15:57:22.371075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.128 [2024-09-27 15:57:22.371517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.128 [2024-09-27 15:57:22.371548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.128 [2024-09-27 15:57:22.371557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.128 [2024-09-27 15:57:22.371724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.128 [2024-09-27 15:57:22.371879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.128 [2024-09-27 15:57:22.371889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.128 [2024-09-27 15:57:22.371901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.128 [2024-09-27 15:57:22.374343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.128 [2024-09-27 15:57:22.383824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.128 [2024-09-27 15:57:22.384307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.128 [2024-09-27 15:57:22.384322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.384329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.384480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.384632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.384638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.384643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.387084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.396564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.397184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.397216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.397225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.397392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.397546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.397553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.397558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.400010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.409205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.409778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.409808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.409817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.409990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.410146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.410152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.410158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.412600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.421959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.422454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.422484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.422494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.422664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.422818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.422824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.422830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.425278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.434619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.435215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.435245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.435255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.435422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.435577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.435583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.435589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.438039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.447236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.447735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.447766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.447775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.447948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.448104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.448110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.448115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.450559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.459908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.460539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.460570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.460579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.460750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.460911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.460919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.460924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.463366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.472574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.472931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.472946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.472952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.473104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.473255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.473262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.473267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.475705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.485356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.485738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.485751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.485757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.485913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.486065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.486071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.486076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.488513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.497991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.129 [2024-09-27 15:57:22.498556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.129 [2024-09-27 15:57:22.498587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.129 [2024-09-27 15:57:22.498596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.129 [2024-09-27 15:57:22.498763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.129 [2024-09-27 15:57:22.498923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.129 [2024-09-27 15:57:22.498931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.129 [2024-09-27 15:57:22.498940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.129 [2024-09-27 15:57:22.501384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.129 [2024-09-27 15:57:22.510731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.129 [2024-09-27 15:57:22.511308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.511339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.511348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.511515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.511670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.511676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.511682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.514142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.130 [2024-09-27 15:57:22.523484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.523986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.524002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.524008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.524160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.524312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.524318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.524323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.526765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.130 [2024-09-27 15:57:22.536105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.536662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.536692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.536701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.536868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.537030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.537037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.537043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.539487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.130 [2024-09-27 15:57:22.548831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.549443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.549474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.549484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.549650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.549805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.549811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.549817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.552270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.130 [2024-09-27 15:57:22.561474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.561989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.562004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.562010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.562163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.562315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.562321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.562326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.564767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.130 [2024-09-27 15:57:22.574112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.574664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.574695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.574704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.574872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.575033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.575040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.575046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.577490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.130 [2024-09-27 15:57:22.586832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.587445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.587476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.587485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.587652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.587813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.587820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.587826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.590276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.130 [2024-09-27 15:57:22.599475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.600010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.600041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.600050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.600219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.600374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.600380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.600385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.130 [2024-09-27 15:57:22.602834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.130 [2024-09-27 15:57:22.612189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.130 [2024-09-27 15:57:22.612648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.130 [2024-09-27 15:57:22.612679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.130 [2024-09-27 15:57:22.612688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.130 [2024-09-27 15:57:22.612855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.130 [2024-09-27 15:57:22.613025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.130 [2024-09-27 15:57:22.613033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.130 [2024-09-27 15:57:22.613038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.615482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.393 [2024-09-27 15:57:22.624828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.625448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.625479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.625488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.625655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.625810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.625816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.625822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.628275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.393 [2024-09-27 15:57:22.637475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.638027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.638058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.638067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.638235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.638390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.638397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.638403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.640851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.393 [2024-09-27 15:57:22.650195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.650810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.650840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.650849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.651025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.651181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.651187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.651193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.653637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.393 [2024-09-27 15:57:22.662843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.663510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.663542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.663551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.663718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.663873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.663880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.663885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.666498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.393 [2024-09-27 15:57:22.675573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.676161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.676192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.676204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.676372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.676526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.676534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.676540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.393 [2024-09-27 15:57:22.678991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:42.393 [2024-09-27 15:57:22.688333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.393 [2024-09-27 15:57:22.688721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.393 [2024-09-27 15:57:22.688736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.393 [2024-09-27 15:57:22.688742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.393 [2024-09-27 15:57:22.688898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.393 [2024-09-27 15:57:22.689050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.393 [2024-09-27 15:57:22.689056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.393 [2024-09-27 15:57:22.689061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.394 [2024-09-27 15:57:22.691520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:42.394 [2024-09-27 15:57:22.701011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:42.394 [2024-09-27 15:57:22.701537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.394 [2024-09-27 15:57:22.701568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420 00:38:42.394 [2024-09-27 15:57:22.701577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set 00:38:42.394 [2024-09-27 15:57:22.701744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor 00:38:42.394 [2024-09-27 15:57:22.701905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:42.394 [2024-09-27 15:57:22.701912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:42.394 [2024-09-27 15:57:22.701918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:42.394 [2024-09-27 15:57:22.704360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
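Every failed cycle above is the same five-step sequence: bdev_nvme disconnects the controller, the reconnect's connect() fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 yet, the half-open qpair is torn down (hence the EBADF "Bad file descriptor" flush error), reinitialization is declared failed, and the reset is retried roughly every 12-13 ms until the listener appears further down. A minimal shell sketch of the same probe, assuming bash's /dev/tcp support and that the listener is still absent (address and port taken from the records above):

  # hypothetical probe; expect "Connection refused" until nvmf_subsystem_add_listener runs
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "connect() failed with ECONNREFUSED (errno 111): no listener on 10.0.0.2:4420 yet"
  fi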
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.394 [2024-09-27 15:57:22.713711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.714213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.714244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.714257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.714424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.714579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.714585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.714591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 [2024-09-27 15:57:22.717044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 [2024-09-27 15:57:22.726385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.726888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.726925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.726934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.727104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.727259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.727266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.727272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 [2024-09-27 15:57:22.729716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 [2024-09-27 15:57:22.739060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.739573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.739587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.739593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.739745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.739899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.739906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.739911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 [2024-09-27 15:57:22.742350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 [2024-09-27 15:57:22.751691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.752164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.752177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.752183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.752335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.752487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.752497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.752503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:42.394 [2024-09-27 15:57:22.754946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.394 [2024-09-27 15:57:22.761368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:42.394 [2024-09-27 15:57:22.764431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.764881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.764897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.764903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.765054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.765206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.765211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.765217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 [2024-09-27 15:57:22.767653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 [2024-09-27 15:57:22.777136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.777643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.777676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.777685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.777853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.778015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.778022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.778028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:42.394 [2024-09-27 15:57:22.780472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.394 [2024-09-27 15:57:22.789810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.394 [2024-09-27 15:57:22.790292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-09-27 15:57:22.790327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.394 [2024-09-27 15:57:22.790336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.394 [2024-09-27 15:57:22.790503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.394 [2024-09-27 15:57:22.790658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.394 [2024-09-27 15:57:22.790664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.394 [2024-09-27 15:57:22.790669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.394 [2024-09-27 15:57:22.793120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.394 Malloc0
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:42.394 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.394 [2024-09-27 15:57:22.802487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.395 [2024-09-27 15:57:22.802989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-09-27 15:57:22.803020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.395 [2024-09-27 15:57:22.803029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.395 [2024-09-27 15:57:22.803199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.395 [2024-09-27 15:57:22.803354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.395 [2024-09-27 15:57:22.803360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.395 [2024-09-27 15:57:22.803366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.395 [2024-09-27 15:57:22.805815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.395 [2024-09-27 15:57:22.815173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.395 [2024-09-27 15:57:22.815766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-09-27 15:57:22.815797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.395 [2024-09-27 15:57:22.815806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.395 [2024-09-27 15:57:22.815980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.395 [2024-09-27 15:57:22.816135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.395 [2024-09-27 15:57:22.816142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.395 [2024-09-27 15:57:22.816147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.395 [2024-09-27 15:57:22.818595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:42.395 [2024-09-27 15:57:22.827789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.395 [2024-09-27 15:57:22.828338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:42.395 [2024-09-27 15:57:22.828448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-09-27 15:57:22.828479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ea9d0 with addr=10.0.0.2, port=4420
00:38:42.395 [2024-09-27 15:57:22.828488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ea9d0 is same with the state(6) to be set
00:38:42.395 [2024-09-27 15:57:22.828656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ea9d0 (9): Bad file descriptor
00:38:42.395 [2024-09-27 15:57:22.828810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:42.395 [2024-09-27 15:57:22.828817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:42.395 [2024-09-27 15:57:22.828823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:42.395 [2024-09-27 15:57:22.831272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:42.395 15:57:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 638256
00:38:42.395 [2024-09-27 15:57:22.840463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:42.657 [2024-09-27 15:57:22.911586] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
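The rpc_cmd traces interleaved with the failures above are the target bring-up, and once nvmf_subsystem_add_listener opens 10.0.0.2:4420 the pending reconnect finally succeeds ("Resetting controller successful"). A sketch of the same sequence issued directly with scripts/rpc.py (rpc_cmd is the test harness wrapper around it; every argument below is copied from the trace, not invented):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u sets in-capsule data size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ramdisk bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420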
00:38:50.895 4679.29 IOPS, 18.28 MiB/s 5717.88 IOPS, 22.34 MiB/s 6517.67 IOPS, 25.46 MiB/s 7166.90 IOPS, 28.00 MiB/s 7698.64 IOPS, 30.07 MiB/s 8129.50 IOPS, 31.76 MiB/s 8509.38 IOPS, 33.24 MiB/s 8822.79 IOPS, 34.46 MiB/s 9100.40 IOPS, 35.55 MiB/s
00:38:50.895 Latency(us)
00:38:50.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:50.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:50.895 Verification LBA range: start 0x0 length 0x4000
00:38:50.895 Nvme1n1 : 15.01 9098.53 35.54 14391.07 0.00 5430.97 552.96 15947.09
00:38:50.895 ===================================================================================================================
00:38:50.895 Total : 9098.53 35.54 14391.07 0.00 5430.97 552.96 15947.09
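The IOPS and MiB/s columns in the summary above are consistent with the 4096-byte I/O size reported for the job: 9098.53 IOPS x 4096 B is about 37.27 MB/s, which is 35.54 MiB/s. A quick awk check, with the numbers taken from the table:

  # IOPS * io_size / 2^20 should reproduce the MiB/s column
  awk 'BEGIN { printf "%.2f MiB/s\n", 9098.53 * 4096 / (1024 * 1024) }'   # prints 35.54 MiB/s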
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:50.895 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 639267 ']'
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 639267
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 639267 ']'
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 639267
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 639267
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:51.157 15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 639267'
killing process with pid 639267
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 639267
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 639267
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']'
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:57:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:53.705
00:38:53.705 real 0m28.484s
00:38:53.705 user 1m3.209s
00:38:53.705 sys 0m7.913s
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:53.705 ************************************
00:38:53.705 END TEST nvmf_bdevperf
00:38:53.705 ************************************
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:53.705 ************************************
00:38:53.705 START TEST nvmf_target_disconnect
00:38:53.705 ************************************
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:38:53.705 * Looking for test storage...
00:38:53.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:38:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:53.705 --rc genhtml_branch_coverage=1
00:38:53.705 --rc genhtml_function_coverage=1
00:38:53.705 --rc genhtml_legend=1
00:38:53.705 --rc geninfo_all_blocks=1
00:38:53.705 --rc geninfo_unexecuted_blocks=1
00:38:53.705
00:38:53.705 '
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:38:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:53.705 --rc genhtml_branch_coverage=1
00:38:53.705 --rc genhtml_function_coverage=1
00:38:53.705 --rc genhtml_legend=1
00:38:53.705 --rc geninfo_all_blocks=1
00:38:53.705 --rc geninfo_unexecuted_blocks=1
00:38:53.705
00:38:53.705 '
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:38:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:53.705 --rc genhtml_branch_coverage=1
00:38:53.705 --rc genhtml_function_coverage=1
00:38:53.705 --rc genhtml_legend=1
00:38:53.705 --rc geninfo_all_blocks=1
00:38:53.705 --rc geninfo_unexecuted_blocks=1
00:38:53.705
00:38:53.705 '
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:38:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:53.705 --rc genhtml_branch_coverage=1
00:38:53.705 --rc genhtml_function_coverage=1
00:38:53.705 --rc genhtml_legend=1
00:38:53.705 --rc geninfo_all_blocks=1
00:38:53.705 --rc geninfo_unexecuted_blocks=1
00:38:53.705
00:38:53.705 '
00:38:53.705 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
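The scripts/common.sh trace above is the stock lt/cmp_versions helper deciding that lcov 1.15 predates 2.x, which selects the older coverage option set. A condensed sketch of the same component-wise comparison (simplified: the real helper also validates each component as a number via its decimal function):

  lt() {   # usage: lt VER1 VER2  ->  returns 0 (true) if VER1 < VER2
      local IFS=.- v1 v2 i
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov is older than 2.x"   # matches the trace: lt 1.15 2 returns 0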
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:53.706 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:53.706 15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]]
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
15:57:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 --
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:01.853 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:01.853 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:01.853 Found net devices under 0000:31:00.0: cvl_0_0 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:01.853 Found net devices under 0000:31:00.1: cvl_0_1 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
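The device-discovery loop traced above resolves each matched PCI function to its kernel network interface through sysfs. A minimal standalone sketch of the same lookup (the address 0000:31:00.0 is the example from this run; nvmf/common.sh@407/@423/@424 are the corresponding traced steps):

  pci=0000:31:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs lists the netdevs bound to this function
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep names such as cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"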
00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.853 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:39:01.854 00:39:01.854 --- 10.0.0.2 ping statistics --- 00:39:01.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.854 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
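The namespace plumbing traced just above, collected into one sequence (same interfaces and addresses as this run; requires root):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP stays on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # host -> namespace reachability check

The two pings (host to namespace and namespace back to host) confirm reachability in both directions before the test proceeds.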
00:39:01.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:39:01.854 00:39:01.854 --- 10.0.0.1 ping statistics --- 00:39:01.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.854 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:01.854 ************************************ 00:39:01.854 START TEST nvmf_target_disconnect_tc1 00:39:01.854 ************************************ 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:01.854 15:57:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:01.854 [2024-09-27 15:57:41.856486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.854 [2024-09-27 15:57:41.856572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3eb60 with addr=10.0.0.2, port=4420 00:39:01.854 [2024-09-27 15:57:41.856616] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:01.854 [2024-09-27 15:57:41.856631] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:01.854 [2024-09-27 15:57:41.856639] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:39:01.854 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:01.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:01.854 Initializing NVMe Controllers 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:01.854 00:39:01.854 real 0m0.138s 00:39:01.854 user 0m0.055s 00:39:01.854 sys 0m0.082s 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:01.854 ************************************ 00:39:01.854 END TEST nvmf_target_disconnect_tc1 00:39:01.854 ************************************ 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:01.854 ************************************ 00:39:01.854 START TEST nvmf_target_disconnect_tc2 00:39:01.854 ************************************ 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=645381 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 645381 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 645381 ']' 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:01.854 15:57:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:01.854 [2024-09-27 15:57:42.016764] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:01.854 [2024-09-27 15:57:42.016825] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.855 [2024-09-27 15:57:42.106591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:01.855 [2024-09-27 15:57:42.154507] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.855 [2024-09-27 15:57:42.154557] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
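Behind these app.c notices, nvmfappstart boils down to launching nvmf_tgt inside the test namespace with the flags traced above; a condensed sketch (paths as in this workspace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!   # 645381 in this run
  # -m 0xF0 is a core mask selecting cores 4-7, matching the "Reactor started
  # on core 4..7" notices; -e 0xFFFF enables every tracepoint group.

waitforlisten then blocks until the target's RPC socket (/var/tmp/spdk.sock) answers.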
00:39:01.855 [2024-09-27 15:57:42.154565] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.855 [2024-09-27 15:57:42.154573] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.855 [2024-09-27 15:57:42.154579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:01.855 [2024-09-27 15:57:42.154767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:39:01.855 [2024-09-27 15:57:42.154890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:39:01.855 [2024-09-27 15:57:42.155185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:39:01.855 [2024-09-27 15:57:42.154991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:39:02.428 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.428 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:39:02.428 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:02.428 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:02.428 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.429 Malloc0 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.429 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.690 [2024-09-27 15:57:42.920383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.690 15:57:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.690 [2024-09-27 15:57:42.960766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=645729 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:02.690 15:57:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:04.611 15:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 645381 00:39:04.611 15:57:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error 
(sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 [2024-09-27 15:57:44.999333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 
00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Read completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 Write completed with error (sct=0, sc=8) 00:39:04.611 starting I/O failed 00:39:04.611 [2024-09-27 15:57:44.999660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:04.611 [2024-09-27 15:57:44.999955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.611 [2024-09-27 15:57:44.999979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.611 qpair failed and we were unable to recover it. 00:39:04.611 [2024-09-27 15:57:45.000219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.611 [2024-09-27 15:57:45.000236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.611 qpair failed and we were unable to recover it. 00:39:04.611 [2024-09-27 15:57:45.000536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.611 [2024-09-27 15:57:45.000547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.611 qpair failed and we were unable to recover it. 00:39:04.611 [2024-09-27 15:57:45.000869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.611 [2024-09-27 15:57:45.000879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.611 qpair failed and we were unable to recover it. 00:39:04.611 [2024-09-27 15:57:45.001381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.611 [2024-09-27 15:57:45.001445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.611 qpair failed and we were unable to recover it. 
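Every aborted command in the dump above reports the same status pair. Decoded against the Generic Command Status table of the NVMe base specification (a small illustrative helper, not part of the harness):

  # sct = status code type, sc = status code, exactly as printed in the log
  decode_nvme_status() {
      case "$1/$2" in
          0/0) echo "Generic / Successful Completion" ;;
          0/8) echo "Generic / Command Aborted due to SQ Deletion" ;;
          *)   echo "SCT $1 / SC $2 (see the NVMe base spec status tables)" ;;
      esac
  }
  decode_nvme_status 0 8

sct=0/sc=8 is the expected signature here: the kill -9 of the target (pid 645381, traced earlier) tears the queue pairs down while queued commands (-q 32 per qpair) are still in flight, so the host completes them as aborted.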
00:39:04.612 [2024-09-27 15:57:45.001833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.001845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.002267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.002323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.002731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.002742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.003124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.003181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.003435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.003446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.003801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.003810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.004032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.004042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.004395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.004404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.004727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.004736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 00:39:04.612 [2024-09-27 15:57:45.005083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.612 [2024-09-27 15:57:45.005093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.612 qpair failed and we were unable to recover it. 
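errno = 111 in these connect() failures is ECONNREFUSED: with the target process gone, nothing listens on 10.0.0.2:4420 any more, so each reconnect attempt is refused at the TCP level. A quick confirmation of the mapping (the header path is the usual Linux location; it can differ per distribution):

  grep ECONNREFUSED /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */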
00:39:04.612-00:39:04.613 [several dozen further identical retry records elided, spanning 15:57:45.005433 through 15:57:45.023145: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.]
00:39:04.613 [2024-09-27 15:57:45.023486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.613 [2024-09-27 15:57:45.023494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.613 qpair failed and we were unable to recover it. 00:39:04.613 [2024-09-27 15:57:45.023806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.613 [2024-09-27 15:57:45.023814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.613 qpair failed and we were unable to recover it. 00:39:04.613 [2024-09-27 15:57:45.024004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.613 [2024-09-27 15:57:45.024014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.024245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.024253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.024610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.024618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.024850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.024858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.025159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.025168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.025433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.025440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.025771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.025778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.026082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.026090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 
00:39:04.614 [2024-09-27 15:57:45.026428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.026436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.026751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.026759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.026992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.026999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.027333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.027340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.027557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.027565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.027750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.027757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.028072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.028081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.028385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.028392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.028720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.028727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.029066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.029073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 
00:39:04.614 [2024-09-27 15:57:45.029390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.029398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.029753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.029760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.030103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.030111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.030483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.030490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.030730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.030740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.031028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.031036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.031379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.031387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.031702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.031709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.032003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.032012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.032421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.032428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 
00:39:04.614 [2024-09-27 15:57:45.032760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.032767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.033087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.033094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.033420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.033427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.033762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.033770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.034082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.034090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.034257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.034267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.034615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.034622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.034963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.034970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.035359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.614 [2024-09-27 15:57:45.035366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.614 qpair failed and we were unable to recover it. 00:39:04.614 [2024-09-27 15:57:45.035685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.035693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 
00:39:04.615 [2024-09-27 15:57:45.036014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.036022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.036344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.036352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.036683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.036691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.036915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.036925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.037251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.037259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.037575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.037584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.037914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.037922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.038222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.038230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.038559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.038568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.038909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.038918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 
00:39:04.615 [2024-09-27 15:57:45.039238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.039246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.039554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.039568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.039949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.039958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.040280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.040288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.040585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.040593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.040918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.040928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.041075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.041082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.041307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.041315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.041668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.041676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.041973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.041981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 
00:39:04.615 [2024-09-27 15:57:45.042216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.042224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.042540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.042550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.042889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.042920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.043213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.043223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.043537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.043545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.043918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.043927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.044150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.044158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.044483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.044491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.044795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.044802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.045141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.045149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 
00:39:04.615 [2024-09-27 15:57:45.045485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.045493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.045835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.045843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.046191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.046198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.046510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.046517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.046851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.615 [2024-09-27 15:57:45.046859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.615 qpair failed and we were unable to recover it. 00:39:04.615 [2024-09-27 15:57:45.047066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.047074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.047412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.047420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.047737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.047752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.048056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.048066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.048355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.048362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 
00:39:04.616 [2024-09-27 15:57:45.048652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.048661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.048856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.048865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.049226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.049234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.049555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.049563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.049856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.049863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.050193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.050200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.050523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.050532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.050724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.050731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.051035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.051043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.051347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.051355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 
00:39:04.616 [2024-09-27 15:57:45.051537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.051544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.051912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.051919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.052212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.052222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.052557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.052565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.052886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.052902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.053217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.053224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.053390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.053399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.053722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.053730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.054072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.054082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.054429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.054436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 
00:39:04.616 [2024-09-27 15:57:45.054758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.054766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.055087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.055095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.055426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.055433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.055752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.055761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.056042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.056050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.056379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.056387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.056681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.616 [2024-09-27 15:57:45.056688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.616 qpair failed and we were unable to recover it. 00:39:04.616 [2024-09-27 15:57:45.057000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.057008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.057214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.057222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.057554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.057563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 
00:39:04.617 [2024-09-27 15:57:45.057876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.057885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.058218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.058226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.058569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.058577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.058891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.058906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.059239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.059248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.059530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.059537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.059857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.059864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.060163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.060171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.060395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.060403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.060634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.060642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 
00:39:04.617 [2024-09-27 15:57:45.060995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.061002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.061301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.061309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.061622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.061631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.061951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.061958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.062269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.062276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.062602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.062610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.062933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.062941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.063151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.063158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.063482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.063489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.063793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.063801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 
00:39:04.617 [2024-09-27 15:57:45.064129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.064137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.064446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.064454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.064800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.064807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.065016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.065024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.065371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.065378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.065696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.065703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.066023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.066030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.066340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.066348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.066526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.066534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 00:39:04.617 [2024-09-27 15:57:45.066830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.617 [2024-09-27 15:57:45.066838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.617 qpair failed and we were unable to recover it. 
00:39:04.899 [2024-09-27 15:57:45.127131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.127140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.127459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.127467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.127785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.127794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.128122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.128131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.128332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.128342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.128550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.128559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.128783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.128792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.129111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.129121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.129456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.129465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.129787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.129795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 
00:39:04.899 [2024-09-27 15:57:45.130136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.130145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.130463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.130472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.130786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.130795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.131122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.131131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.131434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.899 [2024-09-27 15:57:45.131443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.899 qpair failed and we were unable to recover it. 00:39:04.899 [2024-09-27 15:57:45.131762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.131771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.132083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.132092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.132413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.132421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.132739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.132748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.132937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.132945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 
00:39:04.900 [2024-09-27 15:57:45.133218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.133225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.133506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.133513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.133721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.133730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.134062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.134070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.134401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.134409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.134769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.134776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.135098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.135106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.135301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.135309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.135646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.135653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.135971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.135979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 
00:39:04.900 [2024-09-27 15:57:45.136304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.136311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.136655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.136664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.136982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.136990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.137311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.137320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.137660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.137667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.137862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.137870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.138205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.138213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.138503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.138511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.138835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.138842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.139074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.139082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 
00:39:04.900 [2024-09-27 15:57:45.139399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.139406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.139609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.139616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.139980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.139989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.140299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.140306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.140601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.140608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.140820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.140829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.141143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.141150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.141465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.141473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.141795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.141804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 00:39:04.900 [2024-09-27 15:57:45.142159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.900 [2024-09-27 15:57:45.142168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.900 qpair failed and we were unable to recover it. 
00:39:04.901 [2024-09-27 15:57:45.142495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.142502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.142822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.142829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.143142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.143149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.143569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.143578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.143904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.143913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.144224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.144231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.144402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.144410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.144767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.144774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.145090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.145098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.145416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.145424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 
00:39:04.901 [2024-09-27 15:57:45.145727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.145738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.146096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.146103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.146421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.146429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.146619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.146626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.146838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.146845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.147147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.147155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.147442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.147451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.147869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.147876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.148230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.148237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.148508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.148516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 
00:39:04.901 [2024-09-27 15:57:45.148849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.148857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.149176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.149184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.149427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.149435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.149746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.149754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.150098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.150105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.150437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.150445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.150770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.150777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.151066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.151074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.901 [2024-09-27 15:57:45.151310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.901 [2024-09-27 15:57:45.151319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.901 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.151639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.151648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 
00:39:04.902 [2024-09-27 15:57:45.151997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.152005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.152311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.152319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.152630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.152639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.152994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.153002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.153324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.153332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.153675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.153683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.154085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.154096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.154270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.154278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.154584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.154592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.154914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.154922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 
00:39:04.902 [2024-09-27 15:57:45.155253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.155261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.155605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.155614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.155961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.155969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.156168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.156176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.156489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.156496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.156812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.156820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.157153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.157162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.157485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.157494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.157812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.157820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.158142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.158150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 
00:39:04.902 [2024-09-27 15:57:45.158362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.158370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.158679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.158687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.159017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.159025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.159356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.159364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.159691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.159698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.160020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.160027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.160363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.160371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.160706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.160713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.161009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.902 [2024-09-27 15:57:45.161017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.902 qpair failed and we were unable to recover it. 00:39:04.902 [2024-09-27 15:57:45.161312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.161319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 
00:39:04.903 [2024-09-27 15:57:45.161634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.161644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.161964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.161973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.162155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.162162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.162570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.162577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.162933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.162941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.163281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.163288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.163607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.163615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.163936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.163944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.164329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.164337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.164637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.164645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 
00:39:04.903 [2024-09-27 15:57:45.164844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.164853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.165111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.165119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.165432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.165440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.165826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.165834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.166035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.166043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.166369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.166377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.166697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.166705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.167026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.167033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.167339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.167353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.167701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.167709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 
00:39:04.903 [2024-09-27 15:57:45.168033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.168041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.168359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.168368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.168706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.168713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.169038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.169046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.169367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.169375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.169693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.169701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.170021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.170029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.170367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.170374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.170697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.170704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 00:39:04.903 [2024-09-27 15:57:45.171038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.903 [2024-09-27 15:57:45.171046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.903 qpair failed and we were unable to recover it. 
00:39:04.903 [2024-09-27 15:57:45.171346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:04.903 [2024-09-27 15:57:45.171354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:04.903 qpair failed and we were unable to recover it.
00:39:04.903 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 15:57:45.171 through 15:57:45.236 ...]
00:39:04.909 [2024-09-27 15:57:45.236366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:04.909 [2024-09-27 15:57:45.236373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:04.909 qpair failed and we were unable to recover it.
00:39:04.909 [2024-09-27 15:57:45.236755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.236762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.237050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.237058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.237221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.237229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.237457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.237465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.237807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.237815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.238117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.238125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.238446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.238454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.238648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.238657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.238975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.238984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.239328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.239337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 
00:39:04.909 [2024-09-27 15:57:45.239531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.239540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.909 qpair failed and we were unable to recover it. 00:39:04.909 [2024-09-27 15:57:45.239943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.909 [2024-09-27 15:57:45.239951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.240275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.240282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.240480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.240488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.240851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.240858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.241185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.241193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.241520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.241528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.241855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.241863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.242184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.242192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.242513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.242521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 
00:39:04.910 [2024-09-27 15:57:45.242892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.242902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.243252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.243259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.243423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.243433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.243813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.243821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.244155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.244162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.244482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.244491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.244848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.244857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.245192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.245202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.245517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.245526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.245856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.245865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 
00:39:04.910 [2024-09-27 15:57:45.246194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.246203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.246425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.246434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.246748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.246757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.247072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.247079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.247403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.247411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.247723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.247732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.248057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.248065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.248383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.248390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.248718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.248726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.249123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.249132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 
00:39:04.910 [2024-09-27 15:57:45.249459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.249467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.249639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.249647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.249955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.249963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.250289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.250298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.250624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.250633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.250949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.250958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.251282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.251291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.251628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.251637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.251956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.910 [2024-09-27 15:57:45.251965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.910 qpair failed and we were unable to recover it. 00:39:04.910 [2024-09-27 15:57:45.252294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.252305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 
00:39:04.911 [2024-09-27 15:57:45.252583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.252591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.252765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.252774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.252968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.252977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.253250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.253257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.253578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.253588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.253911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.253920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.254238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.254246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.254599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.254607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.254968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.254978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 00:39:04.911 [2024-09-27 15:57:45.255099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.911 [2024-09-27 15:57:45.255108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.911 qpair failed and we were unable to recover it. 
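Errno 111 on Linux is ECONNREFUSED: the TCP connect() to 10.0.0.2:4420 (the standard NVMe/TCP port) is being refused because nothing is listening on the target side at that moment, and the host keeps retrying every few hundred microseconds, which is why the triplet recurs until the listener comes up or the test gives up. A minimal standalone sketch (not SPDK code; address and port are simply taken from the log above) that reproduces the same errno:

/* econnrefused_demo.c - minimal sketch, assuming a Linux host with no
 * listener on the target address; expect "errno = 111 (Connection refused)"
 * just as posix_sock_create reports above. Depending on the network path,
 * connect() may instead fail with ETIMEDOUT or EHOSTUNREACH. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = {0};

    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}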
00:39:04.911 [2024-09-27 15:57:45.255251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7cb60 is same with the state(6) to be set 
00:39:04.911 Read completed with error (sct=0, sc=8) 
00:39:04.911 starting I/O failed 
[... all 32 outstanding completions (22 reads, 10 writes) fail the same way with (sct=0, sc=8), each followed by "starting I/O failed"; duplicates elided ...]
00:39:04.911 [2024-09-27 15:57:45.256129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:39:04.911 [2024-09-27 15:57:45.256584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:04.911 [2024-09-27 15:57:45.256648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 
00:39:04.911 qpair failed and we were unable to recover it. 
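Decoding the burst above: -6 is -ENXIO ("No such device or address"), which is how spdk_nvme_qpair_process_completions surfaces a dead transport, and each aborted command completes with status code type sct=0 (Generic Command Status) and status code sc=0x8, which the NVMe base spec defines as Command Aborted due to SQ Deletion; that is consistent with the qpair being torn down underneath the outstanding I/O. A hedged sketch (assuming the SPDK-style bit layout of the completion status word, with the phase tag in bit 0) showing how such a status word packs and unpacks:

/* status_decode.c - sketch of the NVMe CQE status layout as SPDK models it:
 * bit 0 = phase tag, bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT). */
#include <stdio.h>

int main(void)
{
    unsigned sct = 0, sc = 0x8;       /* the values printed in the log above */
    unsigned status = (sct << 9) | (sc << 1);

    printf("packed status word = 0x%03x\n", status);
    printf("unpacked: sct=%u, sc=%u\n", (status >> 9) & 0x7, (status >> 1) & 0xff);
    return 0;
}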
00:39:04.911 [2024-09-27 15:57:45.256969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:04.911 [2024-09-27 15:57:45.256982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 
00:39:04.911 qpair failed and we were unable to recover it. 
[... the same triplet for tqpair=0x1a6eca0 repeats roughly a hundred more times between 15:57:45.257311 and 15:57:45.287724; duplicates elided ...]
00:39:04.914 [2024-09-27 15:57:45.288040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:04.914 [2024-09-27 15:57:45.288047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 
00:39:04.914 qpair failed and we were unable to recover it. 
00:39:04.914 [2024-09-27 15:57:45.288447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.288456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.288770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.288778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.289104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.289112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.289415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.289423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.289745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.289753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.290081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.290089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.290261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.290269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.290649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.290656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.290995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.291003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.291325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.291332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 
00:39:04.914 [2024-09-27 15:57:45.291658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.291665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.291964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.291971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.292342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.292349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.292667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.292674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.292959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.292967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.293251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.293259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.293498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.293506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.293778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.293787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.294176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.294185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.294461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.294470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 
00:39:04.914 [2024-09-27 15:57:45.294790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.294799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.295107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.295115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.295435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.914 [2024-09-27 15:57:45.295442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.914 qpair failed and we were unable to recover it. 00:39:04.914 [2024-09-27 15:57:45.295749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.295756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.295923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.295931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.296290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.296299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.296614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.296622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.296948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.296956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.297272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.297280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.297613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.297621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 
00:39:04.915 [2024-09-27 15:57:45.297936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.297950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.298219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.298226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.298559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.298566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.298900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.298908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.299230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.299238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.299566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.299573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.299905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.299913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.300228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.300236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.300583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.300592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.300917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.300924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 
00:39:04.915 [2024-09-27 15:57:45.301273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.301281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.301603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.301611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.301932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.301940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.302154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.302162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.302449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.302456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.302789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.302796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.303097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.303107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.303425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.303434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.303753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.303762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.304075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.304084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 
00:39:04.915 [2024-09-27 15:57:45.304401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.304408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.304726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.304734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.304968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.304976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.305298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.305306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.305509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.305516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.305872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.305880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.306205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.306213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.306501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.306511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.915 qpair failed and we were unable to recover it. 00:39:04.915 [2024-09-27 15:57:45.306692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.915 [2024-09-27 15:57:45.306700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.307041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.307049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 
00:39:04.916 [2024-09-27 15:57:45.307372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.307380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.307693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.307700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.307923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.307930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.308270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.308278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.308623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.308631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.308828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.308835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.309167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.309175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.309544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.309552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.309873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.309880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.310202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.310210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 
00:39:04.916 [2024-09-27 15:57:45.310544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.310552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.310768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.310776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.311125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.311135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.311452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.311459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.311779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.311787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.312105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.312112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.312432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.312440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.312624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.312632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.312920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.312928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.313239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.313247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 
00:39:04.916 [2024-09-27 15:57:45.313563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.313570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.313902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.313910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.314244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.314251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.314555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.314563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.314886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.314899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.315217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.315225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.315564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.315571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.315892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.315905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.316260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.316267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.316624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.316633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 
00:39:04.916 [2024-09-27 15:57:45.316852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.316861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.317223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.317231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.317547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.916 [2024-09-27 15:57:45.317555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.916 qpair failed and we were unable to recover it. 00:39:04.916 [2024-09-27 15:57:45.317871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.317879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.318198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.318208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.318526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.318534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.318853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.318862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.319168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.319177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.319502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.319511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.319707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.319716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 
00:39:04.917 [2024-09-27 15:57:45.320028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.320035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.320371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.320379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.320698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.320705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.321025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.321033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.321358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.321365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.321683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.321690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.322005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.322012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.322327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.322335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.322701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.322708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.323027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.323035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 
00:39:04.917 [2024-09-27 15:57:45.323400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.323407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.323713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.323720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.324035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.324043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.324326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.324334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.324620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.324629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.324980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.324990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.325318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.325327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.325643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.325651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.325969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.325977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.326297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.326304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 
00:39:04.917 [2024-09-27 15:57:45.326610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.326618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.326976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.326984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.327324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.327331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.327661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.327669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.327980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.327987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.328403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.328412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.328728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.328736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.329062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.329071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.329393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.329402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 00:39:04.917 [2024-09-27 15:57:45.329726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:04.917 [2024-09-27 15:57:45.329734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:04.917 qpair failed and we were unable to recover it. 
00:39:04.917 [2024-09-27 15:57:45.330045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:04.917 [2024-09-27 15:57:45.330054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:04.917 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 15:57:45.330 to 15:57:45.394: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and tqpair=0x1a6eca0 cannot be recovered ...]
00:39:05.200 [2024-09-27 15:57:45.394395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.200 [2024-09-27 15:57:45.394404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.200 qpair failed and we were unable to recover it.
00:39:05.200 [2024-09-27 15:57:45.394746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.394754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.395073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.395081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.395395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.395404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.395728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.395735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.395913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.395922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.396307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.396316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.396638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.396646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.396967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.396975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.397174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.397181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.397453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.397460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 
00:39:05.200 [2024-09-27 15:57:45.397748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.397756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.398112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.398121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.398340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.398349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.398694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.398702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.399036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.399045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.399246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.399255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.399577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.399585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.399912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.399920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.400263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.400270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 00:39:05.200 [2024-09-27 15:57:45.400473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.200 [2024-09-27 15:57:45.400480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.200 qpair failed and we were unable to recover it. 
00:39:05.200 [2024-09-27 15:57:45.400868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.400876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.401205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.401213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.401533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.401540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.401862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.401870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.402234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.402243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.402549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.402557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.402904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.402912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.403307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.403316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.403405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.403412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.403692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.403701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 
00:39:05.201 [2024-09-27 15:57:45.404023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.404031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.404274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.404282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.404597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.404605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.404936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.404944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.405298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.405305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.405609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.405617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.405986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.405994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.406200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.406208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.406587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.406594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.406790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.406798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 
00:39:05.201 [2024-09-27 15:57:45.407116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.407125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.407446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.407455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.407632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.407641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.407984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.407992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.408340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.408347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.408681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.408688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.408998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.409006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.409338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.409346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.409543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.409551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.409752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.409760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 
00:39:05.201 [2024-09-27 15:57:45.410091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.410099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.410471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.410479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.410712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.410719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.411035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.411042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.411232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.411242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.411564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.411571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.411903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.201 [2024-09-27 15:57:45.411913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.201 qpair failed and we were unable to recover it. 00:39:05.201 [2024-09-27 15:57:45.412233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.412241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.412560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.412568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.412774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.412783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 
00:39:05.202 [2024-09-27 15:57:45.413101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.413109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.413410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.413418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.413769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.413776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.414103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.414111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.414441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.414449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.414658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.414665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.414978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.414985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.415197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.415205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.415435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.415443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.415798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.415806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 
00:39:05.202 [2024-09-27 15:57:45.416133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.416141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.416461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.416469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.416811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.416818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.417095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.417103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.417421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.417429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.417735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.417742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.418080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.418089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.418404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.418413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.418717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.418724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.418970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.418978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 
00:39:05.202 [2024-09-27 15:57:45.419348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.419356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.419682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.419690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.420020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.420028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.420358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.420366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.420545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.420552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.420862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.420870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.421189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.421197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.421531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.421539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.421855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.421862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.422153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.422161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 
00:39:05.202 [2024-09-27 15:57:45.422372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.202 [2024-09-27 15:57:45.422381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.202 qpair failed and we were unable to recover it. 00:39:05.202 [2024-09-27 15:57:45.422703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.422710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.423037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.423045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.423371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.423379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.423741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.423757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.424081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.424089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.424410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.424418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.424630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.424644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.425000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.425008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.425328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.425336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 
00:39:05.203 [2024-09-27 15:57:45.425657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.425664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.426061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.426070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.426397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.426405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.426728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.426736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.427072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.427079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.427400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.427408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.427738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.427745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.428050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.428058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.428375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.428382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.428706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.428714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 
00:39:05.203 [2024-09-27 15:57:45.429040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.429047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.429365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.429373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.429687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.429694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.430018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.430026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.430346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.430353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.430683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.430691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.431108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.431118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.431420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.431429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.431610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.431618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.431957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.431966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 
00:39:05.203 [2024-09-27 15:57:45.432288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.432296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.432619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.432627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.432959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.432967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.433313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.433321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.433551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.433562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.203 qpair failed and we were unable to recover it. 00:39:05.203 [2024-09-27 15:57:45.433884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.203 [2024-09-27 15:57:45.433891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.204 qpair failed and we were unable to recover it. 00:39:05.204 [2024-09-27 15:57:45.434208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.204 [2024-09-27 15:57:45.434216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.204 qpair failed and we were unable to recover it. 00:39:05.204 [2024-09-27 15:57:45.434533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.204 [2024-09-27 15:57:45.434540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.204 qpair failed and we were unable to recover it. 00:39:05.204 [2024-09-27 15:57:45.434845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.204 [2024-09-27 15:57:45.434853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.204 qpair failed and we were unable to recover it. 00:39:05.204 [2024-09-27 15:57:45.435170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.204 [2024-09-27 15:57:45.435178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.204 qpair failed and we were unable to recover it. 
00:39:05.204 [2024-09-27 15:57:45.435505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.204 [2024-09-27 15:57:45.435513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.204 qpair failed and we were unable to recover it.
00:39:05.209 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 further times between 15:57:45.435 and 15:57:45.500; duplicate entries elided ...]
00:39:05.209 [2024-09-27 15:57:45.501062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.209 [2024-09-27 15:57:45.501070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.209 qpair failed and we were unable to recover it. 00:39:05.209 [2024-09-27 15:57:45.501345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.209 [2024-09-27 15:57:45.501355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.209 qpair failed and we were unable to recover it. 00:39:05.209 [2024-09-27 15:57:45.501556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.209 [2024-09-27 15:57:45.501564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.209 qpair failed and we were unable to recover it. 00:39:05.209 [2024-09-27 15:57:45.501757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.209 [2024-09-27 15:57:45.501766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.209 qpair failed and we were unable to recover it. 00:39:05.209 [2024-09-27 15:57:45.501964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.209 [2024-09-27 15:57:45.501973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.502371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.502379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.502684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.502691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.503006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.503014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.503356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.503364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.503717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.503724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 
00:39:05.210 [2024-09-27 15:57:45.504062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.504070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.504406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.504414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.504827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.504836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.505156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.505164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.505512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.505519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.505845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.505852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.506174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.506182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.506504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.506511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.506843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.506851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.507194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.507201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 
00:39:05.210 [2024-09-27 15:57:45.507506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.507514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.507708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.507716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.508046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.508054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.508385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.508392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.508718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.508725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.509061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.509069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.509402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.509410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.509742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.509749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.510095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.510105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.510423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.510432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 
00:39:05.210 [2024-09-27 15:57:45.510645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.510653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.510964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.510973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.511298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.511305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.511640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.511648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.511972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.511980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.512227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.512235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.512559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.512569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.512922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.512929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.513281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.513288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.513606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.513614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 
00:39:05.210 [2024-09-27 15:57:45.513946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.513954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.210 qpair failed and we were unable to recover it. 00:39:05.210 [2024-09-27 15:57:45.514300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.210 [2024-09-27 15:57:45.514307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.514623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.514630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.514903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.514911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.515251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.515259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.515478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.515487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.515813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.515821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.516155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.516164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.516478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.516486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.516806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.516814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 
00:39:05.211 [2024-09-27 15:57:45.517194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.517202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.517462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.517470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.517751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.517758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.518099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.518107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.518422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.518430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.518628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.518638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.518971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.518979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.519301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.519309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.519637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.519645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.519971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.519979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 
00:39:05.211 [2024-09-27 15:57:45.520326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.520333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.520655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.520662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.520989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.520997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.521318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.521325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.521647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.521654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.521969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.521977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.522328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.522336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.522513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.522520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.522847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.522855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.523163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.523170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 
00:39:05.211 [2024-09-27 15:57:45.523397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.523404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.523628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.523635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.523941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.523950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.524273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.524281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.524457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.524465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.524823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.524830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.525142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.525150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.525459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.525466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.211 [2024-09-27 15:57:45.525789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.211 [2024-09-27 15:57:45.525797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.211 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.525991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.525999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 
00:39:05.212 [2024-09-27 15:57:45.526247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.526254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.526564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.526571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.526902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.526909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.527293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.527301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.527603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.527611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.527934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.527943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.528272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.528280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.528632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.528640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.529047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.529056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.529122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.529130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 
00:39:05.212 [2024-09-27 15:57:45.529447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.529456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.529784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.529791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.530111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.530119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.530336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.530343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.530671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.530678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.530968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.530975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.531316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.531324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.531683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.531691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.532024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.532032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.532391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.532398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 
00:39:05.212 [2024-09-27 15:57:45.532696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.532703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.533019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.533027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.533247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.533256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.533436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.533445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.533785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.533793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.534192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.534200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.534513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.534522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.534706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.534713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.535045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.535053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.535419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.535427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 
00:39:05.212 [2024-09-27 15:57:45.535755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.535763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.536099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.212 [2024-09-27 15:57:45.536114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.212 qpair failed and we were unable to recover it. 00:39:05.212 [2024-09-27 15:57:45.536422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.536429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.536721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.536729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.536904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.536913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.537259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.537266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.537588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.537596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.537928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.537936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.538152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.538159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.538483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.538491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 
00:39:05.213 [2024-09-27 15:57:45.538812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.538820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.539143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.539151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.539468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.539476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.539793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.539803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.540109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.540117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.540488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.540495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.540791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.540799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.541098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.541106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.541424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.541432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 00:39:05.213 [2024-09-27 15:57:45.541747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.213 [2024-09-27 15:57:45.541756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.213 qpair failed and we were unable to recover it. 
00:39:05.213 [2024-09-27 15:57:45.542071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.213 [2024-09-27 15:57:45.542081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.213 qpair failed and we were unable to recover it.
00:39:05.213 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 15:57:45.542 through 15:57:45.607 ...]
00:39:05.219 [2024-09-27 15:57:45.607749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.219 [2024-09-27 15:57:45.607756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.219 qpair failed and we were unable to recover it.
00:39:05.219 [2024-09-27 15:57:45.608083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.608094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.608296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.608304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.608624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.608632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.608964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.608972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.609292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.609300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.609648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.609656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.609972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.609979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.610315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.610323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.610638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.610646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.611007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.611016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 
00:39:05.219 [2024-09-27 15:57:45.611339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.611347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.611661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.611668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.611994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.612002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.612326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.612333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.612656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.612664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.612852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.612860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.613200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.613208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.613563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.613571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.613813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.613820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.614151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.614159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 
00:39:05.219 [2024-09-27 15:57:45.614486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.614493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.614816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.614823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.615155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.615163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.615487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.615494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.615814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.615821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.219 qpair failed and we were unable to recover it. 00:39:05.219 [2024-09-27 15:57:45.616139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.219 [2024-09-27 15:57:45.616149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.616455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.616463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.616782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.616795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.617162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.617170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.617478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.617486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 
00:39:05.220 [2024-09-27 15:57:45.617850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.617857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.618159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.618167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.618492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.618499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.618819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.618828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.619274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.619282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.619594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.619607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.619913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.619923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.620232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.620239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.620557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.620564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.620754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.620767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 
00:39:05.220 [2024-09-27 15:57:45.621103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.621111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.621435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.621443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.621770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.621777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.622065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.622073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.622394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.622401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.622735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.622743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.623033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.623041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.623367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.623375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.623696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.623703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.624032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.624040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 
00:39:05.220 [2024-09-27 15:57:45.624367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.624375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.624700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.624709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.625034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.625042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.625351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.625360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.625689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.625696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.626016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.626024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.626254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.626263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.626581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.626588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.626985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.626993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.627172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.627180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 
00:39:05.220 [2024-09-27 15:57:45.627504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.627512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.627822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.627830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.220 [2024-09-27 15:57:45.628148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.220 [2024-09-27 15:57:45.628156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.220 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.628465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.628473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.628838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.628845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.629028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.629037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.629339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.629348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.629702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.629710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.630027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.630035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.630350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.630358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 
00:39:05.221 [2024-09-27 15:57:45.630681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.630688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.631087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.631102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.631505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.631512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.631820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.631827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.632050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.632058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.632380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.632387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.632709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.632716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.633034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.633042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.633379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.633386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.633595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.633603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 
00:39:05.221 [2024-09-27 15:57:45.633802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.633811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.634120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.634129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.634445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.634453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.634794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.634801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.634969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.634978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.635370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.635379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.635698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.635707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.636030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.636038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.636363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.636370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.636725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.636732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 
00:39:05.221 [2024-09-27 15:57:45.637052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.637060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.637354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.637361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.637724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.637733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.638063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.638071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.638400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.638407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.638749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.221 [2024-09-27 15:57:45.638759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.221 qpair failed and we were unable to recover it. 00:39:05.221 [2024-09-27 15:57:45.639075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.639083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.639406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.639415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.639738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.639747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.640066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.640075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 
00:39:05.222 [2024-09-27 15:57:45.640402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.640409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.640728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.640736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.641056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.641066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.641389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.641398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.641709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.641716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.641945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.641953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.642273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.642280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.642583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.642591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.642911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.642919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.643241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.643249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 
00:39:05.222 [2024-09-27 15:57:45.643566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.643573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.643904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.643912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.644252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.644260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.644590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.644598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.644809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.644818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.645123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.645131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.645445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.645453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.645643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.645652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.645976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.645984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.646309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.646317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 
00:39:05.222 [2024-09-27 15:57:45.646637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.646644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.646969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.646977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.647301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.647311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.647620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.647628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.647867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.647875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.648291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.648299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.648544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.648552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.648893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.648908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.649242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.649249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 00:39:05.222 [2024-09-27 15:57:45.649553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.649561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it. 
00:39:05.222 [2024-09-27 15:57:45.649884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.222 [2024-09-27 15:57:45.649892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.222 qpair failed and we were unable to recover it.
[the same three-part error record — posix.c:1055:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 15:57:45.649 through 15:57:45.715; duplicate entries elided]
00:39:05.505 [2024-09-27 15:57:45.715687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.715696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it.
00:39:05.506 [2024-09-27 15:57:45.716015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.716023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.716350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.716358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.716662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.716670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.717052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.717059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.717363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.717371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.717695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.717703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.718027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.718035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.718249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.718258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.718571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.718578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.718887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.718901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 
00:39:05.506 [2024-09-27 15:57:45.719107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.719114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.719328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.719335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.719616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.719624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.719941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.719951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.720281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.720289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.720614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.720622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.720948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.720956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.721164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.721171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.721511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.721518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.721838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.721845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 
00:39:05.506 [2024-09-27 15:57:45.722048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.722056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.722425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.722432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.722756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.722763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.723092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.723100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.723390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.723398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.723808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.723817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.724119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.724130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.724437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.724445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.724787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.724794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.506 [2024-09-27 15:57:45.725099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.725107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 
00:39:05.506 [2024-09-27 15:57:45.725440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.506 [2024-09-27 15:57:45.725448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.506 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.725661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.725669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.726042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.726049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.726398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.726406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.726738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.726745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.726924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.726932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.727175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.727183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.727400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.727409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.727725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.727732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.728043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.728051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 
00:39:05.507 [2024-09-27 15:57:45.728387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.728394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.728698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.728705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.729033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.729040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.729356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.729364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.729687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.729694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.730012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.730020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.730195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.730204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.730581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.730588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.730909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.730917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.731216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.731223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 
00:39:05.507 [2024-09-27 15:57:45.731528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.731536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.731754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.731763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.732167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.732175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.732472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.732480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.732808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.732815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.733142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.733150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.733484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.733492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.733826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.733834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.734161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.734169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.734498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.734506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 
00:39:05.507 [2024-09-27 15:57:45.734819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.734827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.735227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.735235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.735555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.735563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.735967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.735975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.736193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.736201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.736397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.736407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.736705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.736713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.507 [2024-09-27 15:57:45.737030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.507 [2024-09-27 15:57:45.737040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.507 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.737352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.737359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.737681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.737688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 
00:39:05.508 [2024-09-27 15:57:45.738029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.738036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.738358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.738366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.738592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.738600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.738931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.738939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.739344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.739351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.739659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.739667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.739965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.739973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.740303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.740310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.740631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.740638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.740859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.740866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 
00:39:05.508 [2024-09-27 15:57:45.741202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.741209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.741509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.741517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.741757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.741764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.742084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.742092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.742426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.742433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.742729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.742736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.742967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.742975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.743270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.743277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.743647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.743654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.743839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.743848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 
00:39:05.508 [2024-09-27 15:57:45.744219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.744226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.744579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.744587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.744922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.744930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.745267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.745274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.745591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.745605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.745917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.745927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.746243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.746251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.746577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.746585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.746909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.746917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.747284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.747293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 
00:39:05.508 [2024-09-27 15:57:45.747615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.747623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.747941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.747949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.748284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.748291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.748596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.508 [2024-09-27 15:57:45.748604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.508 qpair failed and we were unable to recover it. 00:39:05.508 [2024-09-27 15:57:45.748922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.748930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.749270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.749278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.749594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.749602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.749811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.749818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.750132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.750140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.750475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.750482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 
00:39:05.509 [2024-09-27 15:57:45.750809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.750816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.751189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.751197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.751498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.751506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.751832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.751841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.752165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.752174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.752497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.752506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.752831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.752840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.753167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.753177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.753508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.753518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.753688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.753698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 
00:39:05.509 [2024-09-27 15:57:45.754042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.754050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.754369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.754379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.754564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.754573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.754801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.754809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.755033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.755041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.755377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.755384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.755720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.755727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.756063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.756072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.756353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.756361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 00:39:05.509 [2024-09-27 15:57:45.756691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.509 [2024-09-27 15:57:45.756699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.509 qpair failed and we were unable to recover it. 
00:39:05.509 [2024-09-27 15:57:45.757025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.509 [2024-09-27 15:57:45.757033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.509 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x1a6eca0 (addr=10.0.0.2, port=4420) repeats ~200 more times between 15:57:45.757 and 15:57:45.820; only the timestamps differ ...]
00:39:05.515 [2024-09-27 15:57:45.819943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.515 [2024-09-27 15:57:45.819951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.515 qpair failed and we were unable to recover it.
00:39:05.515 [2024-09-27 15:57:45.820249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.820257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.820591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.820599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.820778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.820788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.821116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.821124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.821446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.821454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.821774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.821782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.822068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.822076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.822449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.822457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.822760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.822768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.823055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.823063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 
00:39:05.515 [2024-09-27 15:57:45.823376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.515 [2024-09-27 15:57:45.823384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.515 qpair failed and we were unable to recover it. 00:39:05.515 [2024-09-27 15:57:45.823707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.823715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.824036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.824044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.824366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.824374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.824593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.824600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.824944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.824952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.825281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.825290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.825651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.825660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.825976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.825985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.826305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.826313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 
00:39:05.516 [2024-09-27 15:57:45.826635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.826643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.826824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.826832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.827251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.827259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.827561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.827569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.827893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.827907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.828221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.828229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.828427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.828437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.828776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.828783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.829154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.829163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.829478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.829486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 
00:39:05.516 [2024-09-27 15:57:45.829808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.829816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.830142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.830150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.830462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.830470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.830786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.830794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.831114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.831123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.831285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.831293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.831564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.831572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.831875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.831883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.832210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.832218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 00:39:05.516 [2024-09-27 15:57:45.832505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.832514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.516 qpair failed and we were unable to recover it. 
00:39:05.516 [2024-09-27 15:57:45.832715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.516 [2024-09-27 15:57:45.832724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.832941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.832949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.833155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.833164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.833526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.833533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.833857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.833865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.834151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.834162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.834504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.834512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.834844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.834854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.835185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.835195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.835510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.835518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 
00:39:05.517 [2024-09-27 15:57:45.835832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.835841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.836166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.836174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.836493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.836502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.836821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.836830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.837147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.837156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.837459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.837466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.837793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.837802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.838142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.838152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.838469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.838478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.838798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.838808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 
00:39:05.517 [2024-09-27 15:57:45.839131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.839140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.839509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.839518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.839840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.839850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.840166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.840176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.840496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.840505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.840811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.840821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.841209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.841221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.841535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.841544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.841785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.841794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.842022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.842032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 
00:39:05.517 [2024-09-27 15:57:45.842212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.842220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.842537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.842546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.842871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.842880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.843261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.843270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.843589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.517 [2024-09-27 15:57:45.843598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.517 qpair failed and we were unable to recover it. 00:39:05.517 [2024-09-27 15:57:45.843907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.843918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.844206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.844215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.844536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.844546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.844864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.844874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.845058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.845070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 
00:39:05.518 [2024-09-27 15:57:45.845403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.845414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.845728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.845737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.845911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.845921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.846254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.846264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.846611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.846621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.846947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.846956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.847289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.847298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.847585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.847594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.847957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.847966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.848308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.848318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 
00:39:05.518 [2024-09-27 15:57:45.848533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.848542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.848879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.848888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.849229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.849239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.849616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.849630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.850023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.850034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.850373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.850384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.850704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.850715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.850925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.850934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.851158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.851168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.851497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.851506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 
00:39:05.518 [2024-09-27 15:57:45.851821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.851831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.852152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.852161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.852476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.852485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.852662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.852671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.853011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.853022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.853226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.853235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.853558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.853567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.853908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.853918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.854229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.518 [2024-09-27 15:57:45.854238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.518 qpair failed and we were unable to recover it. 00:39:05.518 [2024-09-27 15:57:45.854560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.854570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 
00:39:05.519 [2024-09-27 15:57:45.854909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.854919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.855230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.855238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.855558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.855567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.855880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.855889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.856206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.856215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.856517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.856527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.856703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.856715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.857048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.857057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.857382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.857394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.857708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.857720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 
00:39:05.519 [2024-09-27 15:57:45.858047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.858058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.858383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.858393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.858711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.858720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.859042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.859050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.859360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.859368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.859679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.859687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.859908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.859916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.860251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.860261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.860591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.860598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 00:39:05.519 [2024-09-27 15:57:45.860973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.519 [2024-09-27 15:57:45.860980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.519 qpair failed and we were unable to recover it. 
00:39:05.519 [2024-09-27 15:57:45.861285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.519 [2024-09-27 15:57:45.861293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.519 qpair failed and we were unable to recover it.
00:39:05.525 [... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously through 2024-09-27 15:57:45.928021 ...]
00:39:05.525 [2024-09-27 15:57:45.928356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.928364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.928691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.928699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.929005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.929013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.929337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.929350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.929641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.929651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.929965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.929974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.930228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.930236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.930552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.930560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.930739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.930747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.931086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.931094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 
00:39:05.525 [2024-09-27 15:57:45.931393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.931401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.931689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.931697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.932036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.932044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.932345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.932353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.932690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.932698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.933027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.933035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.933232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.933240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.933594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.933602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.933951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.933959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.934283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.934291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 
00:39:05.525 [2024-09-27 15:57:45.934613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.934621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.934943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.934951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.935278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.935286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.935611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.525 [2024-09-27 15:57:45.935619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.525 qpair failed and we were unable to recover it. 00:39:05.525 [2024-09-27 15:57:45.935830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.935838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.936104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.936112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.936453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.936462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.936787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.936796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.936922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.936930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.937194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.937204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 
00:39:05.526 [2024-09-27 15:57:45.937530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.937540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.937852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.937860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.938263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.938272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.938571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.938580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.938915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.938924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.939267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.939276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.939497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.939505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.939899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.939908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.940101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.940112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.940462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.940470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 
00:39:05.526 [2024-09-27 15:57:45.940795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.940803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.940974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.940982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.941174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.941182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.941510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.941517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.941828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.941837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.942120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.942128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.942459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.942467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.942789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.942797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.943123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.943132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.943456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.943464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 
00:39:05.526 [2024-09-27 15:57:45.943793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.943801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.944114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.944123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.944410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.944419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.944746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.944754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.944939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.944947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.945279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.945288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.945687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.526 [2024-09-27 15:57:45.945696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.526 qpair failed and we were unable to recover it. 00:39:05.526 [2024-09-27 15:57:45.946016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.946024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.946345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.946356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.946674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.946681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 
00:39:05.527 [2024-09-27 15:57:45.946859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.946867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.947195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.947204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.947540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.947549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.947725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.947732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.947970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.947978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.948206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.948216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.948495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.948503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.948705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.948713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.949087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.949095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.949437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.949446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 
00:39:05.527 [2024-09-27 15:57:45.949769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.949779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.950076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.950085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.950407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.950416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.950740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.950749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.951071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.951080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.951403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.951413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.951732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.951740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.952055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.952063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.952381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.952390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.952715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.952724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 
00:39:05.527 [2024-09-27 15:57:45.953040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.953048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.953369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.953377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.953704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.953713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.954039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.954047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.954264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.954271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.954605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.954612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.954846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.954853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.955193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.955201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.955519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.955527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.955843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.955851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 
00:39:05.527 [2024-09-27 15:57:45.956159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.956167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.956537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.527 [2024-09-27 15:57:45.956544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.527 qpair failed and we were unable to recover it. 00:39:05.527 [2024-09-27 15:57:45.956839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.956849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.957182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.957190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.957507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.957515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.957839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.957846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.958068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.958076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.958370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.958380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.958733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.958742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.959055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.959062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 
00:39:05.528 [2024-09-27 15:57:45.959369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.959377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.959709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.959716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.960029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.960037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.960364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.960372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.960688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.960697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.961017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.961025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.961358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.961366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.961684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.961692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.961861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.961869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.962116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.962124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 
00:39:05.528 [2024-09-27 15:57:45.962445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.962452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.962774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.962783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.962983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.962991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.963345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.963352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.963666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.963674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.963987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.963994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.964317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.964325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.964642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.964651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.964975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.964985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.965305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.965313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 
00:39:05.528 [2024-09-27 15:57:45.965644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.965652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.965976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.965984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.966296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.966304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.966479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.966488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.966844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.966851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.528 qpair failed and we were unable to recover it. 00:39:05.528 [2024-09-27 15:57:45.967175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.528 [2024-09-27 15:57:45.967183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it. 00:39:05.529 [2024-09-27 15:57:45.967514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.529 [2024-09-27 15:57:45.967521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it. 00:39:05.529 [2024-09-27 15:57:45.967827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.529 [2024-09-27 15:57:45.967837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it. 00:39:05.529 [2024-09-27 15:57:45.968156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.529 [2024-09-27 15:57:45.968165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it. 00:39:05.529 [2024-09-27 15:57:45.968480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.529 [2024-09-27 15:57:45.968488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it. 
00:39:05.529 [2024-09-27 15:57:45.968677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.529 [2024-09-27 15:57:45.968687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.529 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 15:57:45.968677 through 15:57:46.032481; only the timestamps change ...]
00:39:05.811 [2024-09-27 15:57:46.032481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.032491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it.
00:39:05.811 [2024-09-27 15:57:46.032832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.032841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.033049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.033057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.033270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.033277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.033608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.033616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.033817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.033824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.034189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.034199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.034525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.034532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.034852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.034861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.035210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.035219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.035423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.035430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 
00:39:05.811 [2024-09-27 15:57:46.035762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.035771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.035962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.035970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.036308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.036316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.036521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.036529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.036864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.036872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.037165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.037175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.037507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.037515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.037823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.037831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.038164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.038173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.038477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.038485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 
00:39:05.811 [2024-09-27 15:57:46.038693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.038702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.039042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.039050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.039377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.039387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.039724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.039733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.040066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.040074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 00:39:05.811 [2024-09-27 15:57:46.040293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.811 [2024-09-27 15:57:46.040310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.811 qpair failed and we were unable to recover it. 
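For context: errno = 111 is Linux ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 (the NVMe/TCP port) when the initiator tried to reconnect. A minimal standalone C sketch, not SPDK source, that reproduces the same errno against the address and port taken from the log:

/* Minimal sketch (not SPDK code): connect() to an address with no
 * listener fails with ECONNREFUSED (111) on Linux, which is exactly
 * what posix_sock_create() reports above before
 * nvme_tcp_qpair_connect_sock() gives up on the qpair. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}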
00:39:05.811 Read completed with error (sct=0, sc=8)
00:39:05.811 starting I/O failed
00:39:05.812 [above pair repeated for 31 more queued I/Os: 32 completions in total (23 reads, 9 writes), all completed with error (sct=0, sc=8), each starting I/O failed]
00:39:05.812 [2024-09-27 15:57:46.041088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:39:05.812 [2024-09-27 15:57:46.041449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.812 [2024-09-27 15:57:46.041509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420
00:39:05.812 qpair failed and we were unable to recover it.
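For context: in an NVMe completion status, sct=0 selects the Generic Command Status type, and within that type sc=0x08 is "Command Aborted due to SQ Deletion" per the NVMe base specification (my reading of the spec, not something the log states), which matches the reads and writes above being aborted when the failed qpair's submission queue was torn down. A minimal standalone C sketch, not SPDK source, decoding the pair of values from the log:

/* Minimal sketch (not SPDK code): decodes the (sct=0, sc=8) status on the
 * aborted completions above, using generic status codes as listed in the
 * NVMe base specification. */
#include <stdio.h>

static const char *nvme_generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    unsigned sct = 0, sc = 8; /* values from the log's completion entries */
    if (sct == 0) /* SCT 0x0 = Generic Command Status type */
        printf("sct=%u, sc=%u -> %s\n", sct, sc, nvme_generic_sc_str(sc));
    return 0;
}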
00:39:05.812 [2024-09-27 15:57:46.041928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.812 [2024-09-27 15:57:46.041940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.812 qpair failed and we were unable to recover it.
00:39:05.816 [above three messages repeated 139 more times through 15:57:46.085650: reconnect attempts to tqpair=0x1a6eca0 at 10.0.0.2, port=4420 kept failing with connect() errno = 111 and the qpair could not be recovered]
00:39:05.816 [2024-09-27 15:57:46.086022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.086031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.086313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.086321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.086658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.086667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.087011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.087019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.087322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.087331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.087551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.087559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.087884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.087899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.088216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.088223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.088546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.088554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.088872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.088879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 
00:39:05.816 [2024-09-27 15:57:46.089184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.089192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.089520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.089528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.089737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.089745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.090089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.090099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.090428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.090437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.090724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.090731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.090917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.090925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.091179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.091188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.091516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.091526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.091728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.091737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 
00:39:05.816 [2024-09-27 15:57:46.092058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.092066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.092387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.092397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.092723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.092731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.093056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.093064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.093397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.093406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.093729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.093738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.094064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.094073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.094391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.094399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.094726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.094735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.095021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.095030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 
00:39:05.816 [2024-09-27 15:57:46.095356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.095363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.095690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.095698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.096023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.096031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.096346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.096354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.096593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.096600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.096943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.816 [2024-09-27 15:57:46.096952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.816 qpair failed and we were unable to recover it. 00:39:05.816 [2024-09-27 15:57:46.097143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.097152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.097490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.097497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.097815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.097823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.098152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.098160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 
00:39:05.817 [2024-09-27 15:57:46.098563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.098572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.098902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.098912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.099092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.099099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.099336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.099345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.099655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.099662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.099992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.100000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.100327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.100335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.100661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.100669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.100988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.100996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.101318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.101325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 
00:39:05.817 [2024-09-27 15:57:46.101649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.101658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.101888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.101904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.102232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.102240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.102559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.102570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.102871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.102878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.103196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.103204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.103445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.103452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.103620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.103628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.103969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.103979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.104229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.104237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 
00:39:05.817 [2024-09-27 15:57:46.104626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.104633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.104913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.104921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.105271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.105280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.105606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.105613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.105814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.105821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.106101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.106110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.106320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.106329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.106655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.106664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.106998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.107005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.107289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.107297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 
00:39:05.817 [2024-09-27 15:57:46.107631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.107639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.107835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.107842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.108231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.108241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.817 [2024-09-27 15:57:46.108563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.817 [2024-09-27 15:57:46.108572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.817 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.108904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.108912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.109228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.109236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.109567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.109574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.109743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.109751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.110132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.110141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.110477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.110484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 
00:39:05.818 [2024-09-27 15:57:46.110811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.110823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.111217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.111225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.111551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.111559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.111724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.111733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.112043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.112051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.112387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.112395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.112721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.112729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.113136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.113145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.113480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.113489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.113814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.113822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 
00:39:05.818 [2024-09-27 15:57:46.114114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.114122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.114389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.114396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.114721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.114728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.115065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.115073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.115412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.115421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.115760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.115769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.116086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.116096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.116410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.116418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.116805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.116813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.117127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.117135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 
00:39:05.818 [2024-09-27 15:57:46.117470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.117477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.117702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.117711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.118041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.118049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.118267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.118275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.118667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.118674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.118851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.118859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.119154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.119161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.119495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.119505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.119791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.119798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.120086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.120094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 
00:39:05.818 [2024-09-27 15:57:46.120308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.120318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.818 qpair failed and we were unable to recover it. 00:39:05.818 [2024-09-27 15:57:46.120497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.818 [2024-09-27 15:57:46.120505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.120801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.120809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.121142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.121150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.121348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.121356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.121688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.121696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.121905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.121913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.122101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.122109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.122391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.122398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.122712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.122722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 
00:39:05.819 [2024-09-27 15:57:46.122933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.122941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.123277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.123285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.123611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.123619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.123941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.123949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.124246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.124254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.124565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.124574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.124929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.124938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.125137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.125146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.125519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.125527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 00:39:05.819 [2024-09-27 15:57:46.125848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.819 [2024-09-27 15:57:46.125856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.819 qpair failed and we were unable to recover it. 
00:39:05.819 [2024-09-27 15:57:46.126194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.819 [2024-09-27 15:57:46.126201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.819 qpair failed and we were unable to recover it.
00:39:05.819 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." error triplet for tqpair=0x1a6eca0 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 15:57:46.126508 through 15:57:46.192075; duplicate retry records elided ...]
00:39:05.825 [2024-09-27 15:57:46.192283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.825 [2024-09-27 15:57:46.192291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.825 qpair failed and we were unable to recover it.
00:39:05.825 [2024-09-27 15:57:46.192652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.192659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.192997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.193006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.193190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.193205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.193605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.193614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.193929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.193939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.194279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.194287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.194596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.194604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.194966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.194973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.195283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.195291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.195629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.195638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 
00:39:05.825 [2024-09-27 15:57:46.195959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.195967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.196287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.196295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.196618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.196626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.196991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.197000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.197333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.197340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.197665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.197673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.198051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.198061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.198378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.198387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.198601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.198608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.198935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.198944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 
00:39:05.825 [2024-09-27 15:57:46.199252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.199259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.199450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.199457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.199832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.199839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.200190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.825 [2024-09-27 15:57:46.200199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.825 qpair failed and we were unable to recover it. 00:39:05.825 [2024-09-27 15:57:46.200519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.200527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.200854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.200864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.201181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.201189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.201585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.201594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.201917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.201925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.202229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.202238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 
00:39:05.826 [2024-09-27 15:57:46.202572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.202580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.202907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.202915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.203223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.203230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.203558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.203566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.203740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.203749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.203970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.203978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.204043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.204049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.204250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.204258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.204552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.204560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.204864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.204873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 
00:39:05.826 [2024-09-27 15:57:46.205195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.205203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.205523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.205532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.205852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.205860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.206195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.206203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.206527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.206536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.206840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.206849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.207166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.207175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.207489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.207497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.207681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.207689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.208030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.208038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 
00:39:05.826 [2024-09-27 15:57:46.208356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.208363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.208685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.208693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.208866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.208876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.209087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.209096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.209439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.209448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.209755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.209763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.210093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.210101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.210403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.210411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.210730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.210738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.211059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.211068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 
00:39:05.826 [2024-09-27 15:57:46.211395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.211402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.826 qpair failed and we were unable to recover it. 00:39:05.826 [2024-09-27 15:57:46.211721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.826 [2024-09-27 15:57:46.211731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.212068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.212077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.212406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.212414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.212726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.212735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.213070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.213079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.213319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.213328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.213539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.213546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.213873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.213880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.214242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.214250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 
00:39:05.827 [2024-09-27 15:57:46.214571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.214579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.214912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.214921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.215251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.215260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.215580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.215587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.215803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.215810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.216035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.216043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.216370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.216377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.216714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.216723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.217047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.217056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.217454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.217465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 
00:39:05.827 [2024-09-27 15:57:46.217776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.217784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.218077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.218085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.218303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.218312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.218572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.218580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.218957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.218967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.219165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.219174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.219497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.219505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.219833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.219841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.220157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.220165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.220490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.220498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 
00:39:05.827 [2024-09-27 15:57:46.220820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.220829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.221151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.221159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.221477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.221486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.221807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.221815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.221891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.221907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.222085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.222093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.222425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.222432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.222657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.222665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.827 qpair failed and we were unable to recover it. 00:39:05.827 [2024-09-27 15:57:46.222983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.827 [2024-09-27 15:57:46.222991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.223427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.223434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 
00:39:05.828 [2024-09-27 15:57:46.223714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.223723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.224016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.224024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.224362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.224369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.224713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.224721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.225042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.225050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.225376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.225384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.225589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.225597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.225882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.225891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.225974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.225982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.226309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.226318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 
00:39:05.828 [2024-09-27 15:57:46.226651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.226660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.226974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.226983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.227310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.227318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.227624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.227632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.227951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.227961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.228269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.228276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.228598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.228605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.228913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.228921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.229237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.229244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.229577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.229585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 
00:39:05.828 [2024-09-27 15:57:46.229911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.229919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.230237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.230246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.230561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.230569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.230905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.230913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.231194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.231201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.231526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.231533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.231851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.231858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.232164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.232172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.232492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.232501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 00:39:05.828 [2024-09-27 15:57:46.232825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:05.828 [2024-09-27 15:57:46.232833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:05.828 qpair failed and we were unable to recover it. 
00:39:05.828 [2024-09-27 15:57:46.233161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:05.828 [2024-09-27 15:57:46.233172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:05.828 qpair failed and we were unable to recover it.
[... the three-line error sequence above repeats roughly 200 more times between 15:57:46.233 and 15:57:46.297; every connect() attempt from posix.c:1055:posix_sock_create to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x1a6eca0, and the qpair cannot be recovered ...]
00:39:06.112 [2024-09-27 15:57:46.297185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.112 [2024-09-27 15:57:46.297193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.112 qpair failed and we were unable to recover it.
00:39:06.112 [2024-09-27 15:57:46.297527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.297534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.297745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.297753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.297960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.297969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.298348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.298355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.298681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.298689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.299019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.299028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.299355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.299364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.299579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.299587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.299921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.299929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.300220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.300229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 
00:39:06.112 [2024-09-27 15:57:46.300557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.300566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.300959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.300967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.301260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.301268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.301603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.301611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.301831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.301839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.302185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.302193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.302579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.302587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.302782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.302789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.303147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.303155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.303513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.303521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 
00:39:06.112 [2024-09-27 15:57:46.303729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.303737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.304014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.304022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.304213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.304222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.304569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.304577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.304910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.304920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.305242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.305250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.305581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.305589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.305935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.305943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.306338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.306345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 00:39:06.112 [2024-09-27 15:57:46.306529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.112 [2024-09-27 15:57:46.306537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.112 qpair failed and we were unable to recover it. 
00:39:06.113 [2024-09-27 15:57:46.306788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.306796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.307071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.307080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.307327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.307336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.307547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.307556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.307741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.307750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.308061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.308070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.308389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.308398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.308712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.308721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.309050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.309059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.309384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.309392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 
00:39:06.113 [2024-09-27 15:57:46.309715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.309723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.310125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.310134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.310464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.310472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.310649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.310658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.310966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.310974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.311318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.311326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.311667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.311675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.311993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.312001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.312340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.312348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.312653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.312660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 
00:39:06.113 [2024-09-27 15:57:46.312969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.312978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.313384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.313394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.313733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.313741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.314071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.314079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.314415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.314423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.314747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.314754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.315072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.315081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.315395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.315402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.315729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.315738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.315932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.315939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 
00:39:06.113 [2024-09-27 15:57:46.316309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.316318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.316636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.316645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.316972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.316979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.317315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.317323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.317648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.317656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.317977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.317986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.113 [2024-09-27 15:57:46.318298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.113 [2024-09-27 15:57:46.318306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.113 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.318626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.318634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.318953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.318961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.319095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.319102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 
00:39:06.114 [2024-09-27 15:57:46.319383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.319391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.319721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.319728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.320043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.320052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.320369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.320376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.320704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.320713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.320906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.320916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.321225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.321234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.321428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.321437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.321759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.321772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.322070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.322078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 
00:39:06.114 [2024-09-27 15:57:46.322404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.322411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.322731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.322740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.323061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.323069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.323399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.323407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.323736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.323744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.324073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.324081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.324415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.324423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.324826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.324836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.325170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.325179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.325507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.325515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 
00:39:06.114 [2024-09-27 15:57:46.325828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.325835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.326153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.326161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.326479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.326486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.326814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.326823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.327131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.327139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.327351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.327359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.327713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.327721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.328044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.328052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.328371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.328378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.328572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.328579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 
00:39:06.114 [2024-09-27 15:57:46.328939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.328948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.329261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.329268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.329600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.329609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.114 qpair failed and we were unable to recover it. 00:39:06.114 [2024-09-27 15:57:46.329838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.114 [2024-09-27 15:57:46.329846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.330227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.330235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.330574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.330582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.330801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.330810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.331052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.331059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.331394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.331402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.331727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.331735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 
00:39:06.115 [2024-09-27 15:57:46.332071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.332080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.332387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.332395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.332720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.332727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.332949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.332957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.333282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.333289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.333602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.333610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.333945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.333953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.334146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.334155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.334527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.334535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.334952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.334962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 
00:39:06.115 [2024-09-27 15:57:46.335177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.335186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.335513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.335521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.335735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.335743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.336071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.336079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.336404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.336413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.336811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.336820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.337053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.337060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.337381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.337388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.337607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.337615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 00:39:06.115 [2024-09-27 15:57:46.337952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.337960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 
00:39:06.115 [2024-09-27 15:57:46.338280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.115 [2024-09-27 15:57:46.338287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.115 qpair failed and we were unable to recover it. 
00:39:06.115 [... the same three-message pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 15:57:46.338 and 15:57:46.405 ...]
00:39:06.121 [2024-09-27 15:57:46.404923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.404931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 
00:39:06.121 [2024-09-27 15:57:46.405254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.405263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.405589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.405597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.405931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.405940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.406294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.406303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.406642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.406651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.406837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.406847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.407140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.407149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.407471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.407478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.407804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.407813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.408134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.408143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 
00:39:06.121 [2024-09-27 15:57:46.408468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.408476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.408755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.408762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.409071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.409078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.409310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.409318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.409638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.409646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.121 qpair failed and we were unable to recover it. 00:39:06.121 [2024-09-27 15:57:46.410017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.121 [2024-09-27 15:57:46.410026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.410362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.410370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.410689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.410697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.411015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.411024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.411380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.411387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 
00:39:06.122 [2024-09-27 15:57:46.411736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.411743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.412071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.412078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.412301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.412310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.412541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.412551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.412784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.412792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.413114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.413122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.413439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.413447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.413749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.413757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.414069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.414076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.414369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.414377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 
00:39:06.122 [2024-09-27 15:57:46.414703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.414712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.414927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.414935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.415114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.415123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.415473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.415481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.415796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.415805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.416117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.416125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.416448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.416456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.416826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.416833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.417143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.417151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.417474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.417481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 
00:39:06.122 [2024-09-27 15:57:46.417798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.417807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.418138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.418146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.418471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.418479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.418806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.418813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.419150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.419158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.419481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.419488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.419812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.419821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.420139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.420147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.420458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.420466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.420788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.420797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 
00:39:06.122 [2024-09-27 15:57:46.421142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.122 [2024-09-27 15:57:46.421153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.122 qpair failed and we were unable to recover it. 00:39:06.122 [2024-09-27 15:57:46.421467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.421476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.421800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.421808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.422125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.422134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.422456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.422465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.422764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.422771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.423067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.423075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.423400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.423407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.423726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.423734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.424059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.424066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 
00:39:06.123 [2024-09-27 15:57:46.424400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.424407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.424731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.424741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.425049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.425058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.425391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.425398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.425719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.425727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.426052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.426059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.426384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.426392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.426584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.426593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.426918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.426928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.427222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.427232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 
00:39:06.123 [2024-09-27 15:57:46.427556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.427563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.427877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.427885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.428209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.428216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.428540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.428548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.428905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.428913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.429237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.429246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.429565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.429574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.429904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.429913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.430219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.430228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.430545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.430554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 
00:39:06.123 [2024-09-27 15:57:46.430741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.430751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.431052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.431060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.431377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.431386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.431658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.431665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.431924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.431933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.432302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.432310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.432600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.432608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.432882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.432889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.123 qpair failed and we were unable to recover it. 00:39:06.123 [2024-09-27 15:57:46.433190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.123 [2024-09-27 15:57:46.433198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.433525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.433532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 
00:39:06.124 [2024-09-27 15:57:46.433853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.433861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.434071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.434080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.434401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.434410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.434733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.434742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.434962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.434971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.435292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.435300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.435619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.435627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.435946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.435954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.436278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.436286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.436614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.436622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 
00:39:06.124 [2024-09-27 15:57:46.436942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.436951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.437277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.437286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.437677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.437686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.437999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.438007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.438329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.438337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.438656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.438663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.438978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.438987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.439321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.439328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.439632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.439640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.439979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.439987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 
00:39:06.124 [2024-09-27 15:57:46.440302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.440310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.440512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.440521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.440706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.440716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.440980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.440988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.441337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.441346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.441647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.441656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.441985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.441994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.442310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.442318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.442679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.442689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 00:39:06.124 [2024-09-27 15:57:46.442861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.442869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.124 qpair failed and we were unable to recover it. 
00:39:06.124 [2024-09-27 15:57:46.443266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.124 [2024-09-27 15:57:46.443275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.443600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.443608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.443924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.443933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.444276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.444284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.444589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.444598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.444917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.444928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.445110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.445118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.445447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.445455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.445791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.445799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 00:39:06.125 [2024-09-27 15:57:46.446121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.446130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it. 
00:39:06.125 [2024-09-27 15:57:46.446452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.125 [2024-09-27 15:57:46.446461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.125 qpair failed and we were unable to recover it.
[condensed: the same pair of errors — posix_sock_create connect() failure with errno = 111 followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1a6eca0 against 10.0.0.2, port=4420 — repeats roughly 200 more times between 15:57:46.446 and 15:57:46.512, each attempt ending with "qpair failed and we were unable to recover it."]
00:39:06.131 [2024-09-27 15:57:46.511788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.511794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.512085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.512093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.512295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.512303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.512640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.512647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.512970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.512979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.513227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.513236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.513535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.513543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.513851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.513859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.514181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.514189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.514517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.514524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 
00:39:06.131 [2024-09-27 15:57:46.514870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.514877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.515250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.515259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.515624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.515632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.515960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.515969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.516284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.516293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.516610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.516617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.517021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.517029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.517363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.517372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.517688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.517696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.518019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.518027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 
00:39:06.131 [2024-09-27 15:57:46.518390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.518399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.518648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.518655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.518957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.518967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.519308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.519315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.519614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.519622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.519947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.519955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.520247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.520257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.520580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.520588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.520919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.520928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.521284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.521293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 
00:39:06.131 [2024-09-27 15:57:46.521477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.521485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.521859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.521866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.522191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.522199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.522574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.131 [2024-09-27 15:57:46.522582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.131 qpair failed and we were unable to recover it. 00:39:06.131 [2024-09-27 15:57:46.522781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.522789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.522996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.523006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.523332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.523340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.523617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.523624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.523815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.523823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.524006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.524014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 
00:39:06.132 [2024-09-27 15:57:46.524299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.524307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.524642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.524649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.524860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.524870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.525200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.525208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.525526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.525533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.525874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.525881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.526171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.526179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.526504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.526512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.526814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.526822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.527027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.527036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 
00:39:06.132 [2024-09-27 15:57:46.527373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.527381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.527706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.527714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.527911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.527919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.528255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.528263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.528466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.528475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.528808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.528816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.529117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.529125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.529448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.529456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.529784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.529794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.530114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.530123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 
00:39:06.132 [2024-09-27 15:57:46.530423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.530431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.530620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.530628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.530972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.530980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.531197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.531205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.531418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.531426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.531665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.531672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.531845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.531853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.532102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.532111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.532294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.532302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.532679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.532687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 
00:39:06.132 [2024-09-27 15:57:46.533005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.533013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.132 [2024-09-27 15:57:46.533341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.132 [2024-09-27 15:57:46.533348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.132 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.533756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.533763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.534200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.534208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.534293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.534301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.534533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.534545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.534884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.534892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.535222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.535231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.535553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.535560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.535883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.535891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 
00:39:06.133 [2024-09-27 15:57:46.536222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.536230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.536558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.536566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.536900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.536910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.537240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.537255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.537471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.537479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.537901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.537910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.538196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.538204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.538411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.538419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.538710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.538717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.539041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.539050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 
00:39:06.133 [2024-09-27 15:57:46.539230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.539240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.539678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.539687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.540010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.540020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.540383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.540392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.540567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.540576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.540916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.540925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.541112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.541120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.541460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.541467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.541683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.541693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.542057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.542065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 
00:39:06.133 [2024-09-27 15:57:46.542385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.542393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.542578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.542586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.542921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.542929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.543254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.543262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.543456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.543464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.543820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.543829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.544037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.544045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.544399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.544407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.133 qpair failed and we were unable to recover it. 00:39:06.133 [2024-09-27 15:57:46.544729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.133 [2024-09-27 15:57:46.544737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.545070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.545078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 
00:39:06.134 [2024-09-27 15:57:46.545474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.545483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.545800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.545807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.546132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.546141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.546348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.546356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.546556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.546564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.546903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.546911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.547227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.547234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.547568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.547577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.547890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.547906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.548228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.548235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 
00:39:06.134 [2024-09-27 15:57:46.548564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.548571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.548762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.548770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.549139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.549147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.549452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.549461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.549792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.549801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.550110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.550118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.550428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.550436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.550758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.550767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.551072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.551081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 00:39:06.134 [2024-09-27 15:57:46.551403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.134 [2024-09-27 15:57:46.551412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.134 qpair failed and we were unable to recover it. 
00:39:06.134 [2024-09-27 15:57:46.551736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.134 [2024-09-27 15:57:46.551744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.134 qpair failed and we were unable to recover it.
00:39:06.417 [the identical error triplet above repeats continuously from 15:57:46.551736 through 15:57:46.615740 (same errno = 111, same tqpair=0x1a6eca0, same addr=10.0.0.2, port=4420); duplicate entries omitted]
00:39:06.417 [2024-09-27 15:57:46.616066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.616074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.616393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.616402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.616728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.616735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.617042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.617050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.617372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.617379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.617711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.617719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.618045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.618055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.618363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.618371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.618726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.618736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.619138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.619146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 
00:39:06.417 [2024-09-27 15:57:46.619316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.619325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.619595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.619604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.619947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.619956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.620286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.620294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.620583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.620592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.620918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.417 [2024-09-27 15:57:46.620927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.417 qpair failed and we were unable to recover it. 00:39:06.417 [2024-09-27 15:57:46.621192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.621199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.621530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.621538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.621728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.621735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.622031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.622039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 
00:39:06.418 [2024-09-27 15:57:46.622365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.622373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.622729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.622740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.623051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.623060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.623401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.623408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.623735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.623743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.624071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.624079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.624396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.624404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.624737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.624745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.624948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.624956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.625146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.625152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 
00:39:06.418 [2024-09-27 15:57:46.625492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.625501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.625747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.625754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.626073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.626082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.626406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.626414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.626758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.626767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.627112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.627120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.627446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.627454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.627820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.627829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.628148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.628157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.628484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.628491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 
00:39:06.418 [2024-09-27 15:57:46.628887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.628905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.629233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.629240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.629546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.629553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.629868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.629876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.630080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.630089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.630357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.630366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.630711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.630719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.631038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.631046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.631351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.631360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.631685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.631695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 
00:39:06.418 [2024-09-27 15:57:46.632018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.632026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.632335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.632343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.632669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.418 [2024-09-27 15:57:46.632677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.418 qpair failed and we were unable to recover it. 00:39:06.418 [2024-09-27 15:57:46.632995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.633003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.633312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.633321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.633638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.633647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.634048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.634056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.634331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.634339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.634668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.634675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.635011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.635020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 
00:39:06.419 [2024-09-27 15:57:46.635365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.635373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.635588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.635596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.635921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.635929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.636212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.636220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.636540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.636547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.636873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.636881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.637207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.637218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.637536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.637545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.637860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.637868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.638037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.638046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 
00:39:06.419 [2024-09-27 15:57:46.638409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.638418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.638655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.638662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.638971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.638979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.639335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.639343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.639659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.639668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.640032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.640041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.640365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.640372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.640696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.640703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.641023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.641031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.641361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.641368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 
00:39:06.419 [2024-09-27 15:57:46.641685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.641694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.642039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.642048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.642365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.642372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.642696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.642705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.419 qpair failed and we were unable to recover it. 00:39:06.419 [2024-09-27 15:57:46.642917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.419 [2024-09-27 15:57:46.642926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.643109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.643118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.643440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.643447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.643632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.643640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.643829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.643837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.644067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.644077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 
00:39:06.420 [2024-09-27 15:57:46.644394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.644403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.644729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.644737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.645074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.645083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.645415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.645422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.645768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.645777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.646100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.646107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.646388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.646396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.646731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.646739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.647049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.647058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.647386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.647394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 
00:39:06.420 [2024-09-27 15:57:46.647684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.647692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.647892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.647906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.648188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.648196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.648527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.648535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.648857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.648865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.649174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.649184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.649354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.649364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.649600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.649608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.649920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.649929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.650150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.650157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 
00:39:06.420 [2024-09-27 15:57:46.650494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.650502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.650828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.650836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.651147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.651156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.651471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.651481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.651819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.651829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.652049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.652058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.652391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.652401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.652727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.652735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.653070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.653079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.653415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.653423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 
00:39:06.420 [2024-09-27 15:57:46.653750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.653761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.420 [2024-09-27 15:57:46.654070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.420 [2024-09-27 15:57:46.654078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.420 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.654405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.654414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.654781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.654788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.655203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.655213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.655424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.655431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.655618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.655624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.655912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.655920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.656232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.656241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 00:39:06.421 [2024-09-27 15:57:46.656550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.421 [2024-09-27 15:57:46.656559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.421 qpair failed and we were unable to recover it. 
00:39:06.421 [2024-09-27 15:57:46.656882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.421 [2024-09-27 15:57:46.656889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.421 qpair failed and we were unable to recover it.
00:39:06.421 [... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 15:57:46.656 through 15:57:46.722; identical entries elided ...]
00:39:06.426 [2024-09-27 15:57:46.722017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.426 [2024-09-27 15:57:46.722028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.426 qpair failed and we were unable to recover it.
00:39:06.427 [2024-09-27 15:57:46.722386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.722396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.722708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.722717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.723089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.723097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.723398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.723406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.723728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.723736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.724076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.724084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.724411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.724420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.724743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.724752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.724994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.725003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.725340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.725349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 
00:39:06.427 [2024-09-27 15:57:46.725638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.725647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.725969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.725978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.726295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.726304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.726622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.726631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.726961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.726969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.727293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.727302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.727621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.727628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.727949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.727958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.728295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.728303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.728611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.728619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 
00:39:06.427 [2024-09-27 15:57:46.728942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.728950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.729389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.729397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.729808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.729818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.730117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.730125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.730465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.730474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.730810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.730818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.731141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.731150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.731487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.731495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.731802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.731812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.732128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.732136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 
00:39:06.427 [2024-09-27 15:57:46.732349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.732357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.732697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.732706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.733049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.733058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.733425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.733432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.733740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.427 [2024-09-27 15:57:46.733747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.427 qpair failed and we were unable to recover it. 00:39:06.427 [2024-09-27 15:57:46.734065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.734073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.734396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.734404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.734773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.734782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.735087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.735096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.735415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.735422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 
00:39:06.428 [2024-09-27 15:57:46.735751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.735759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.736078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.736085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.736428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.736435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.736763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.736770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.737079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.737087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.737423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.737432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.737753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.737760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.738075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.738083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.738272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.738281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.738612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.738621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 
00:39:06.428 [2024-09-27 15:57:46.738946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.738955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.739282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.739290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.739483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.739491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.739858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.739865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.740207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.740216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.740546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.740554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.740753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.740761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.741049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.741058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.741377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.741384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.741689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.741697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 
00:39:06.428 [2024-09-27 15:57:46.742026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.742034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.742238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.742248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.742440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.742448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.742772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.742780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.742984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.742992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.743358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.743366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.743702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.743710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.743931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.743947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.428 [2024-09-27 15:57:46.744273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.428 [2024-09-27 15:57:46.744281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.428 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.744602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.744609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 
00:39:06.429 [2024-09-27 15:57:46.744934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.744941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.745266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.745274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.745595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.745602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.745805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.745812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.746013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.746022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.746383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.746390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.746693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.746700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.747021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.747030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.747353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.747361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.747529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.747539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 
00:39:06.429 [2024-09-27 15:57:46.747874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.747881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.748128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.748136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.748443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.748450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.748784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.748792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.749105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.749113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.749449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.749456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.749784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.749792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.750107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.750115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.750489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.750499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.750707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.750714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 
00:39:06.429 [2024-09-27 15:57:46.751057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.751064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.751385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.751393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.751718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.751726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.751946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.751953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.752321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.752328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.752514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.752521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.752823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.752830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.753146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.753155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.753263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.753270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.753547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.753556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 
00:39:06.429 [2024-09-27 15:57:46.753754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.753763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.754109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.754117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.754428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.754436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.754771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.754778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.755077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.429 [2024-09-27 15:57:46.755085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.429 qpair failed and we were unable to recover it. 00:39:06.429 [2024-09-27 15:57:46.755254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.755262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.755467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.755475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.755798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.755806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.756116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.756125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.756448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.756455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 
00:39:06.430 [2024-09-27 15:57:46.756784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.756792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.757005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.757014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.757392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.757399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.757696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.757704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.757988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.757995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.758325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.758335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.758540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.758548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.758867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.758875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.759201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.759209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.759529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.759537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 
00:39:06.430 [2024-09-27 15:57:46.759890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.759904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.760064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.760072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.760365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.760374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.760710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.760718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.760956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.760964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.761268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.761276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.761484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.761492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.761867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.761874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.762206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.762214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 00:39:06.430 [2024-09-27 15:57:46.762414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.762421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it. 
00:39:06.430 [2024-09-27 15:57:46.762734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.430 [2024-09-27 15:57:46.762742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.430 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 15:57:46.762 through 15:57:46.824; duplicate records condensed ...]
00:39:06.436 [2024-09-27 15:57:46.824018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.824027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it.
00:39:06.436 [2024-09-27 15:57:46.824323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.824333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.824641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.824650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.824961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.824969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.825300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.825309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.825493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.825502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.825795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.825805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.826117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.826128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.826398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.826406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.826703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.826710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.827018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.827027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 
00:39:06.436 [2024-09-27 15:57:46.827220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.827229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.827550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.827558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.827871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.827881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.828184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.828195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.828356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.828365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.828625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.828635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.828818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.828828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.829017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.829027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.829214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.829223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.829384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.829394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 
00:39:06.436 [2024-09-27 15:57:46.829709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.436 [2024-09-27 15:57:46.829719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.436 qpair failed and we were unable to recover it. 00:39:06.436 [2024-09-27 15:57:46.830048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.830057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.830332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.830342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.830648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.830657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.830976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.830985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.831291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.831299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.831608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.831617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.831922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.831930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.832234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.832242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.832399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.832407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 
00:39:06.437 [2024-09-27 15:57:46.832708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.832716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.833031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.833040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.833358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.833366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.833675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.833685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.833989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.833997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.834323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.834331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.834654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.834662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.834964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.834973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.835284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.835292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.835448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.835455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 
00:39:06.437 [2024-09-27 15:57:46.835816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.835824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.836129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.836138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.836445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.836453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.836762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.836770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.837070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.837080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.837409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.837417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.837723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.837732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.837934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.837943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.838215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.838222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.838529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.838538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 
00:39:06.437 [2024-09-27 15:57:46.838845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.838853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.839174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.839183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.839483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.839491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.839798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.839807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.840109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.840117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.840428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.840436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.840791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.840800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.841103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.841112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.437 [2024-09-27 15:57:46.841386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.437 [2024-09-27 15:57:46.841394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.437 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.841593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.841610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 
00:39:06.438 [2024-09-27 15:57:46.841815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.841825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.842138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.842148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.842451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.842461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.842776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.842785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.843071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.843081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.843416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.843426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.843729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.843738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.844043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.844052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.844369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.844377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.844689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.844698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 
00:39:06.438 [2024-09-27 15:57:46.845006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.845015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.845324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.845332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.845637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.845645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.845959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.845967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.846158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.846169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.846493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.846502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.846665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.846672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.846958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.846967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.847348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.847357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.847653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.847661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 
00:39:06.438 [2024-09-27 15:57:46.847957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.847965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.848293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.848301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.848496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.848506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.848817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.848826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.849131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.849139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.849444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.849452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.849759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.849767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.849959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.849971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.850293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.850301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.850612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.850620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 
00:39:06.438 [2024-09-27 15:57:46.850923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.850932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.851249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.438 [2024-09-27 15:57:46.851257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.438 qpair failed and we were unable to recover it. 00:39:06.438 [2024-09-27 15:57:46.851570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.851578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.851887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.851898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.852204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.852213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.852519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.852527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.852832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.852841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.853168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.853177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.853489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.853498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.853783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.853792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 
00:39:06.439 [2024-09-27 15:57:46.854073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.854083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.854389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.854398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.854694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.854702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.855029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.855038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.855423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.855431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.855736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.855744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.856054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.856063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.856355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.856364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.856712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.856720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.856926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.856936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 
00:39:06.439 [2024-09-27 15:57:46.857254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.857262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.857559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.857567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.857881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.857890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.858194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.858202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.858514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.858523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.858823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.858832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.859128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.859137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.859439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.859448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.859745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.859754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.860106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.860115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 
00:39:06.439 [2024-09-27 15:57:46.860396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.860405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.860715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.860723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.861024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.861032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.861306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.861313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.861615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.861623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.861866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.861874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.862050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.862058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.862376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.862386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.862716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.439 [2024-09-27 15:57:46.862726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.439 qpair failed and we were unable to recover it. 00:39:06.439 [2024-09-27 15:57:46.863030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.440 [2024-09-27 15:57:46.863039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.440 qpair failed and we were unable to recover it. 
00:39:06.440 [2024-09-27 15:57:46.863347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.440 [2024-09-27 15:57:46.863355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.440 qpair failed and we were unable to recover it.
[... identical records elided: the posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1a6eca0" / "qpair failed and we were unable to recover it." sequence repeats continuously between this record and the one below, with only the timestamps advancing ...]
00:39:06.724 [2024-09-27 15:57:46.927171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.724 [2024-09-27 15:57:46.927179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.724 qpair failed and we were unable to recover it.
00:39:06.724 [2024-09-27 15:57:46.927479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.927487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.927793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.927801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.928102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.928112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.928419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.928429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.928768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.928777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.929093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.929102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.929407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.929415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.929618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.929628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.929931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.929940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.930242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.930250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 
00:39:06.724 [2024-09-27 15:57:46.930551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.930560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.930866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.930874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.931181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.931189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.931365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.931373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.931649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.931657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.931971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.931980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.932284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.932292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.932589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.932597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.932976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.932984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.933281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.933289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 
00:39:06.724 [2024-09-27 15:57:46.933598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.933606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.933910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.933918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.934206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.934214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.934424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.934431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.934747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.934756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.935150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.935159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.935454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.935462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.935727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.935735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.936047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.936055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.936372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.936380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 
00:39:06.724 [2024-09-27 15:57:46.936547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.936554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.936871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.724 [2024-09-27 15:57:46.936880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.724 qpair failed and we were unable to recover it. 00:39:06.724 [2024-09-27 15:57:46.937182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.937192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.937496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.937504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.937729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.937738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.938095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.938103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.938408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.938416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.938696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.938704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.939008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.939017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.939322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.939332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 
00:39:06.725 [2024-09-27 15:57:46.939637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.939646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.939955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.939963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.940230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.940239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.940541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.940550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.940856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.940866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.941173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.941182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.941495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.941502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.941806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.941815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.942122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.942132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.942456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.942466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 
00:39:06.725 [2024-09-27 15:57:46.942770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.942779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.943078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.943088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.943373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.943382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.943690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.943700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.944007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.944015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.944320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.944328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.944637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.944646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.944954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.944963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.945264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.945273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.945426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.945436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 
00:39:06.725 [2024-09-27 15:57:46.945617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.945626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.945934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.945943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.946245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.946254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.946562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.725 [2024-09-27 15:57:46.946570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.725 qpair failed and we were unable to recover it. 00:39:06.725 [2024-09-27 15:57:46.946875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.946883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.947185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.947193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.947502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.947509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.947813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.947821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.948130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.948139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.948438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.948446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 
00:39:06.726 [2024-09-27 15:57:46.948750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.948758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.949064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.949073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.949390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.949398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.949703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.949712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.950046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.950055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.950358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.950367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.950676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.950685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.951000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.951009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.951305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.951313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.951624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.951632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 
00:39:06.726 [2024-09-27 15:57:46.951926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.951935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.952258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.952266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.952572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.952580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.952838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.952846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.953051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.953059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.953376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.953384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.953660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.953670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.953957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.953966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.954263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.954271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.954432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.954442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 
00:39:06.726 [2024-09-27 15:57:46.954711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.954720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.955048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.955056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.955262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.955269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.955570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.955579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.955838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.955846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.956196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.956204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.956368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.956378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.956688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.956696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.957011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.957019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 00:39:06.726 [2024-09-27 15:57:46.957108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.726 [2024-09-27 15:57:46.957116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.726 qpair failed and we were unable to recover it. 
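
Editor's note: on Linux, errno 111 is ECONNREFUSED — the target host answered the TCP SYN with a reset because nothing was listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) at the moment of each attempt, which is exactly the condition posix_sock_create reports above. A minimal standalone sketch, illustrative only and not SPDK source, using the address and port taken from the log, that produces the same errno when no listener is up:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);               /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        /* With no listener on 10.0.0.2:4420 this fails with errno 111. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }
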
00:39:06.726 [2024-09-27 15:57:46.957506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.727 [2024-09-27 15:57:46.957596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:06.727 qpair failed and we were unable to recover it.
[... the same failure against tqpair=0x7f76f4000b90 repeats 3 more times (15:57:46.958165, .958436, .958952) ...]
00:39:06.727 [2024-09-27 15:57:46.959385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.727 [2024-09-27 15:57:46.959396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.727 qpair failed and we were unable to recover it.
[... the tqpair=0x1a6eca0 failure triplet resumes and repeats ~65 more times, 15:57:46.959709 through 15:57:46.979020 (Jenkins timestamps 00:39:06.727-00:39:06.728), still without recovering the qpair ...]
00:39:06.728 [2024-09-27 15:57:46.979313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.728 [2024-09-27 15:57:46.979321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.728 qpair failed and we were unable to recover it. 00:39:06.728 [2024-09-27 15:57:46.979630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.728 [2024-09-27 15:57:46.979639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.728 qpair failed and we were unable to recover it. 00:39:06.728 [2024-09-27 15:57:46.979948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.728 [2024-09-27 15:57:46.979957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.728 qpair failed and we were unable to recover it. 00:39:06.728 [2024-09-27 15:57:46.980263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.728 [2024-09-27 15:57:46.980271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.980572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.980580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.980747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.980755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.981050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.981059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.981211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.981221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.981370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.981380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.981687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.981696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 
00:39:06.729 [2024-09-27 15:57:46.981992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.982001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.982318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.982329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.982632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.982641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.982951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.982961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.983268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.983276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.983590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.983599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.983928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.983937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.984257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.984267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.984578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.984587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.984903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.984913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 
00:39:06.729 [2024-09-27 15:57:46.985216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.985226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.985532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.985541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.985849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.985859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.986142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.986151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.986468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.986476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.986788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.986796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.987107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.987115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.987265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.987274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.987587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.987596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 00:39:06.729 [2024-09-27 15:57:46.987890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.729 [2024-09-27 15:57:46.987907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.729 qpair failed and we were unable to recover it. 
00:39:06.729 [2024-09-27 15:57:46.988205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.988214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 645381 Killed "${NVMF_APP[@]}" "$@"
00:39:06.729 [2024-09-27 15:57:46.988524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.988537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:06.729 [2024-09-27 15:57:46.988840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.988855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:06.729 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:39:06.729 [2024-09-27 15:57:46.989068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.989083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:06.729 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:06.729 [2024-09-27 15:57:46.989417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.989443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 [2024-09-27 15:57:46.989772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.989782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.729 qpair failed and we were unable to recover it.
00:39:06.729 [2024-09-27 15:57:46.990157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.729 [2024-09-27 15:57:46.990167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.990463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.990471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 15:57:46.990780 through 15:57:46.993306 ...]
00:39:06.730 [2024-09-27 15:57:46.993491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.993499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.993869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.993878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.994187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.994206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.994536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.994562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=646406
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 646406
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 646406 ']'
00:39:06.730 [2024-09-27 15:57:46.994922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.994940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:06.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:06.730 [2024-09-27 15:57:46.995263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.995281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.995563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.995579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 [2024-09-27 15:57:46.995908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.995918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
00:39:06.730 15:57:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:06.730 [2024-09-27 15:57:46.996122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.730 [2024-09-27 15:57:46.996131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.730 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 15:57:46.996469 through 15:57:47.025387 ...]
00:39:06.733 [2024-09-27 15:57:47.025661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.733 [2024-09-27 15:57:47.025670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.733 qpair failed and we were unable to recover it.
00:39:06.733 [2024-09-27 15:57:47.025965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.025975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.026152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.026160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.026546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.026555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.026883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.026891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.027063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.027073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.027317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.027325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.027546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.027554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.027864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.027874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.028070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.028080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.028257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.028265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 
00:39:06.733 [2024-09-27 15:57:47.028590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.028599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.028924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.733 [2024-09-27 15:57:47.028934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.733 qpair failed and we were unable to recover it. 00:39:06.733 [2024-09-27 15:57:47.028978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.028987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.029167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.029178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.029360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.029368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.029683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.029691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.030009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.030018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.030353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.030363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.030676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.030687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.031005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.031013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 
00:39:06.734 [2024-09-27 15:57:47.031446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.031455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.031757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.031765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.031948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.031956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.032233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.032241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.032552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.032561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.032835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.032845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.033159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.033167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.033458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.033466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.033766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.033776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.034011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.034020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 
00:39:06.734 [2024-09-27 15:57:47.034219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.034228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.034555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.034564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.034880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.034889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.035205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.035214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.035617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.035626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.035797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.035807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.036120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.036130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.036443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.036451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.036771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.036781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.036971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.036980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 
00:39:06.734 [2024-09-27 15:57:47.037157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.037165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.037476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.037486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.037800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.037810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.038113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.734 [2024-09-27 15:57:47.038122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.734 qpair failed and we were unable to recover it. 00:39:06.734 [2024-09-27 15:57:47.038432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.038441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.038760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.038772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.039144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.039154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.039462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.039470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.039652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.039662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.040002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.040011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 
00:39:06.735 [2024-09-27 15:57:47.040219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.040227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.040546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.040555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.040741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.040750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.041069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.041078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.041418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.041428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.041742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.041752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.041946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.041955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.042305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.042314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.042641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.042651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.042818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.042827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 
00:39:06.735 [2024-09-27 15:57:47.043099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.043107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.043409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.043418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.043732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.043742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.044055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.044065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.044389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.044397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.044599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.044608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.044950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.044959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.045281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.045290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.045614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.045624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.045944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.045953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 
00:39:06.735 [2024-09-27 15:57:47.046154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.046162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.046364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.046371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.046683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.046695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.047003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.047012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.047238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.047247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.047420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.047430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.047735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.047745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.048058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.048067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.048379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.048387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 00:39:06.735 [2024-09-27 15:57:47.048699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.735 [2024-09-27 15:57:47.048708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.735 qpair failed and we were unable to recover it. 
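For context: errno = 111 on Linux is ECONNREFUSED, which the kernel reports when the peer is reachable but nothing is listening on the target port (here 10.0.0.2:4420, the NVMe/TCP well-known port, while the target side is restarting), hence the triple repeating until the target comes back up. A minimal standalone C sketch, not part of the test suite and assuming a reachable peer with no listener on the port, reproduces the failure that posix_sock_create logs:

/* Minimal sketch: reproduce the connect() failure logged above.
 * errno 111 (ECONNREFUSED) is what a TCP peer returns (via RST)
 * when no socket is listening on the target port. Address and port
 * mirror the log; any unused port on a reachable host behaves the
 * same. If the host is unreachable you get ETIMEDOUT/EHOSTUNREACH
 * instead. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                /* With no listener on 10.0.0.2:4420 this prints:
                 * connect() failed, errno = 111 (Connection refused) */
                fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                        errno, strerror(errno));
        }
        close(fd);
        return 0;
}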
00:39:06.735 [... the retries continue through 15:57:47.051 ...]
00:39:06.736 [2024-09-27 15:57:47.051130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:39:06.736 [2024-09-27 15:57:47.051184] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
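The two initialization lines are the nvmf target application bringing up a fresh SPDK/DPDK environment (SPDK v25.01-pre on DPDK 23.11.0) while the initiator keeps retrying. In SPDK application code, an EAL command line like the one logged is typically derived from struct spdk_env_opts. A hedged sketch follows, using field names from spdk/env.h as we recall them; the mapping of shm_id 0 to --file-prefix=spdk0 and --proc-type=auto is an assumption, not verified against this exact revision:

/* Hedged sketch: how an SPDK application ends up handing DPDK an
 * EAL command line like the one logged above. Field names are from
 * spdk/env.h; treat the exact option mapping as an assumption. */
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "nvmf";                   /* first token of the EAL line */
        opts.core_mask = "0xF0";              /* -> -c 0xF0 (cores 4-7)      */
        opts.shm_id = 0;                      /* -> --file-prefix=spdk0 (assumed) */
        opts.base_virtaddr = 0x200000000000;  /* -> --base-virtaddr=...      */

        if (spdk_env_init(&opts) < 0) {
                fprintf(stderr, "spdk_env_init failed\n");
                return 1;
        }
        printf("SPDK env up on core mask %s\n", opts.core_mask);
        spdk_env_fini();
        return 0;
}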
00:39:06.736 [... the connect()/qpair-failure triple resumes immediately after initialization and repeats through 15:57:47.074, still errno = 111 against tqpair=0x1a6eca0 at 10.0.0.2:4420 ...]
00:39:06.738 [2024-09-27 15:57:47.074941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.074950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.075261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.075269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.075585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.075594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.075874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.075883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.076201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.076211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.076582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.076591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.076646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.076653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.076959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.076967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.077263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.077273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.077616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.077625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 
00:39:06.738 [2024-09-27 15:57:47.077937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.077945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.078240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.078249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.078468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.078477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.078787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.078796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.079135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.079144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.079317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.079325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.079547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.079556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.079855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.079864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.080197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.080206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.080518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.080527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 
00:39:06.738 [2024-09-27 15:57:47.080849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.080858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.081162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.738 [2024-09-27 15:57:47.081171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.738 qpair failed and we were unable to recover it. 00:39:06.738 [2024-09-27 15:57:47.081494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.081502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.081702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.081711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.082043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.082055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.082387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.082395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.082619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.082628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.082932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.082942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.083259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.083268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.083566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.083575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 
00:39:06.739 [2024-09-27 15:57:47.083890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.083904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.084261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.084270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.084450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.084459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.084771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.084781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.085115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.085124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.085447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.085457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.085649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.085659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.085992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.086001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.086268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.086277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.086618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.086627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 
00:39:06.739 [2024-09-27 15:57:47.086960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.086969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.087295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.087303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.087617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.087627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.087941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.087950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.088242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.088250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.088541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.088550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.088866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.088876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.089002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.089011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.089265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.089275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.089588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.089597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 
00:39:06.739 [2024-09-27 15:57:47.089926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.089937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.090085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.090096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.090432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.090441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.090765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.090773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.091001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.091011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.739 qpair failed and we were unable to recover it. 00:39:06.739 [2024-09-27 15:57:47.091219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.739 [2024-09-27 15:57:47.091228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.091558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.091567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.091751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.091759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.092074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.092084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.092363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.092371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 
00:39:06.740 [2024-09-27 15:57:47.092575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.092583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.092938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.092947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.093254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.093263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.093572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.093580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.093988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.093998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.094222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.094230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.094560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.094570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.094887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.094917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.095210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.095220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.095430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.095440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 
00:39:06.740 [2024-09-27 15:57:47.095632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.095640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.095955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.095964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.096302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.096311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.096520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.096528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.096727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.096735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.096939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.096948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.097275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.097283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.097595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.097604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.097683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.097691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.097978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.097988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 
00:39:06.740 [2024-09-27 15:57:47.098189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.098196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.098517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.098525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.098850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.098859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.099253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.099264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.099574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.099583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.099886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.099898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.100201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.100209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.100530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.100539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.100899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.100908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.101220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.101229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 
00:39:06.740 [2024-09-27 15:57:47.101402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.101412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.740 [2024-09-27 15:57:47.101824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.740 [2024-09-27 15:57:47.101833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.740 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.102134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.102143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.102465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.102474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.102647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.102656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.102994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.103003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.103323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.103333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.103640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.103648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.103959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.103967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.104341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.104349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 
00:39:06.741 [2024-09-27 15:57:47.104629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.104637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.104940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.104948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.105262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.105271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.105590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.105599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.105907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.105917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.106206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.106214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.106525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.106533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.106840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.106848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.107133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.107142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.107451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.107459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 
00:39:06.741 [2024-09-27 15:57:47.107731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.107739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.108053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.108061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.108382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.108390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.108700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.108708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.108992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.109000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.109302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.109310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.109465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.109473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.109749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.109758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.110060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.110069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.110399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.110409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 
00:39:06.741 [2024-09-27 15:57:47.110716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.110724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.111044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.111052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.111377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.111385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.111696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.111704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.111981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.111989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.112298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.112306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.112585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.112593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.112907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.112915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.741 qpair failed and we were unable to recover it. 00:39:06.741 [2024-09-27 15:57:47.113095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.741 [2024-09-27 15:57:47.113104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.742 qpair failed and we were unable to recover it. 00:39:06.742 [2024-09-27 15:57:47.113409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:06.742 [2024-09-27 15:57:47.113417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:06.742 qpair failed and we were unable to recover it. 
00:39:06.742 [2024-09-27 15:57:47.113739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.742 [2024-09-27 15:57:47.113748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.742 qpair failed and we were unable to recover it.
[... the same connect()/qpair error triplet repeats with fresh timestamps from 15:57:47.114 through 15:57:47.137 ...]
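For readers decoding the failure above: errno = 111 is ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections at 10.0.0.2:4420 (the NVMe/TCP well-known port) when connect() was issued. A minimal, self-contained check of that mapping (illustrative only, not part of the test run):

    /* Decode errno 111 the same way the log's posix_sock_create error would. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        printf("ECONNREFUSED == %d\n", ECONNREFUSED);     /* prints 111 on Linux */
        printf("strerror(111): %s\n", strerror(111));     /* "Connection refused" */
        return 0;
    }

Compiled with any C compiler, this confirms the repeated errno = 111 lines mean the listener on port 4420 was down or refusing at that instant, not that the network timed out.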
[... error triplets continue through 15:57:47.138 ...]
00:39:06.744 [2024-09-27 15:57:47.139062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
[... connect()/qpair error triplets for tqpair=0x1a6eca0 continue from 15:57:47.139 through 15:57:47.145 ...]
[... error triplets for tqpair=0x1a6eca0 continue through 15:57:47.147 ...]
00:39:06.745 [2024-09-27 15:57:47.147631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.745 [2024-09-27 15:57:47.147726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420
00:39:06.745 qpair failed and we were unable to recover it.
[... two more identical triplets for tqpair=0x7f76fc000b90 follow at 15:57:47.148 ...]
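The tqpair value switching from 0x1a6eca0 to 0x7f76fc000b90 and back suggests the host keeps allocating fresh qpair objects and re-dialing the same target between failures. A sketch of that reconnect shape in plain POSIX sockets, assuming a fixed retry bound and delay (both invented here; try_connect is a hypothetical helper, and none of this is SPDK's actual loop):

    /* Illustrative reconnect loop mirroring the log's repeated attempts. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            int saved = errno;        /* preserve the failure errno across close() */
            close(fd);
            errno = saved;
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 5; attempt++) {   /* bound is illustrative */
            int fd = try_connect("10.0.0.2", 4420);        /* target from the log */
            if (fd >= 0) {
                puts("connected");
                close(fd);
                return 0;
            }
            fprintf(stderr, "attempt %d: errno=%d (%s)\n",
                    attempt, errno, strerror(errno));
            usleep(300 * 1000);                            /* 300 ms pause, invented */
        }
        fputs("giving up: qpair could not be recovered\n", stderr);
        return 1;
    }

With no listener on 10.0.0.2:4420, every attempt prints errno=111 (Connection refused), which is exactly the pattern flooding this log.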
[... identical connect()/qpair error triplets for tqpair=0x1a6eca0 repeat from 15:57:47.148 through 15:57:47.172 ...]
00:39:06.747 [2024-09-27 15:57:47.172782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:06.747 [2024-09-27 15:57:47.172813] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:06.747 [2024-09-27 15:57:47.172821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:06.747 [2024-09-27 15:57:47.172828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:06.747 [2024-09-27 15:57:47.172834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:06.747 [2024-09-27 15:57:47.172997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:39:06.747 [2024-09-27 15:57:47.173328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:39:06.747 [2024-09-27 15:57:47.173454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:39:06.747 [2024-09-27 15:57:47.173457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
[the connect()/qpair failure sequence above continues meanwhile, interleaved with these notices in the raw log: 7 more occurrences through 2024-09-27 15:57:47.174521]
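The app_setup_trace notices point at SPDK's runtime tracing: group mask 0xFFFF enables all tracepoint groups, and the recorded events can be read from the running app or salvaged from shared memory. A minimal sketch using only the command and path printed in the notices above (the /tmp destination is an arbitrary choice, not from the log):

  # snapshot trace events from the running nvmf app (shm instance id 0, per the notice)
  spdk_trace -s nvmf -i 0
  # or copy the raw trace file out of shared memory for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0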
00:39:06.747 [2024-09-27 15:57:47.174840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:06.747 [2024-09-27 15:57:47.174849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:06.747 qpair failed and we were unable to recover it.
[last 3 log lines repeated 169 more times through 2024-09-27 15:57:47.222318 (wall clock 00:39:06.747 to 00:39:07.032), identical except for timestamps]
00:39:07.032 [2024-09-27 15:57:47.222629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.222638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.222950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.222959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.223287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.223296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.223605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.223614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.223927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.223938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.224245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.224254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.224418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.224425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.224739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.224748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.225057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.225066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.225417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.225425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 
00:39:07.032 [2024-09-27 15:57:47.225734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.225743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.226055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.226063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.226345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.226355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.226668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.226678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.226846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.226855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.227132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.227143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.227337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.227346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.227523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.227533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.227846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.227856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.228194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.228203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 
00:39:07.032 [2024-09-27 15:57:47.228605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.228615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.228793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.228803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.228864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.228872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.229174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.229184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.229499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.229509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.229819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.032 [2024-09-27 15:57:47.229830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.032 qpair failed and we were unable to recover it. 00:39:07.032 [2024-09-27 15:57:47.230131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.230141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.230314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.230321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.230648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.230657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.230959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.230968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 
00:39:07.033 [2024-09-27 15:57:47.231131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.231139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.231441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.231449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.231766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.231775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.232115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.232124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.232291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.232300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.232634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.232643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.232963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.232972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.233268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.233277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.233589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.233597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.233956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.233966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 
00:39:07.033 [2024-09-27 15:57:47.234293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.234302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.234621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.234630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.234952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.234960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.235129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.235139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.235491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.235500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.235669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.235676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.236055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.236065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.236109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.236118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.236402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.236411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.236729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.236737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 
00:39:07.033 [2024-09-27 15:57:47.237051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.237060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.237354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.237362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.237671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.237680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.237997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.238006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.238316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.238325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.238645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.238653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.238843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.238854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.239187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.239196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.239488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.239497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 00:39:07.033 [2024-09-27 15:57:47.239807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.033 [2024-09-27 15:57:47.239815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.033 qpair failed and we were unable to recover it. 
00:39:07.033 [2024-09-27 15:57:47.240129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.240138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.240449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.240458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.240777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.240786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.241121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.241130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.241283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.241292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.241605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.241614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.241935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.241944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.242265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.242274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.242471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.242479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.242787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.242796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 
00:39:07.034 [2024-09-27 15:57:47.243130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.243139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.243320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.243328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.243663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.243673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.243858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.243867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.244270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.244279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.244474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.244482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.244678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.244686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.245017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.245026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.245334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.245343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.245659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.245668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 
00:39:07.034 [2024-09-27 15:57:47.245970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.245978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.246251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.246260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.246565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.246574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.246744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.246754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.246982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.246990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.247190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.247198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.247353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.247362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.247701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.247711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.248041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.248050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.248378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.248386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 
00:39:07.034 [2024-09-27 15:57:47.248695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.248703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.249059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.249067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.249235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.249244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.249553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.249561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.034 qpair failed and we were unable to recover it. 00:39:07.034 [2024-09-27 15:57:47.249713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.034 [2024-09-27 15:57:47.249720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.249978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.249987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.250140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.250149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.250325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.250333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.250642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.250650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.250973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.250981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 
00:39:07.035 [2024-09-27 15:57:47.251327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.251335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.251690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.251698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.252031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.252039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.252193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.252202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.252401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.252409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.252698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.252706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.253031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.253040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.253364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.253372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.253567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.253575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.253915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.253923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 
00:39:07.035 [2024-09-27 15:57:47.254251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.254261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.254450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.254459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.254627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.254636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.254795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.254806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.255173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.255183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.255491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.255500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.255652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.255661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.255865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.255874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.256268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.256278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.256588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.256597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 
00:39:07.035 [2024-09-27 15:57:47.256917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.256927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.257139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.257149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.257473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.257482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.257669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.257679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.257851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.257861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.258151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.258161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.258465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.258475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.258790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.258799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.259090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.259100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.259410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.259419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 
00:39:07.035 [2024-09-27 15:57:47.259589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.035 [2024-09-27 15:57:47.259600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.035 qpair failed and we were unable to recover it. 00:39:07.035 [2024-09-27 15:57:47.259641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.259650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.259924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.259933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.260111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.260121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.260438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.260447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.260722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.260731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.260801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.260809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.261026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.261036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.261352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.261361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.261660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.261669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 
00:39:07.036 [2024-09-27 15:57:47.261941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.261950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.262137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.262145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.262307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.262314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.262636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.262645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.262959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.262967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.263337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.263346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.263659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.263668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.263861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.263869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.264095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.264103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.264283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.264291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 
00:39:07.036 [2024-09-27 15:57:47.264600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.264609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.264801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.264810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.265182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.265192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.265502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.265511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.265829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.265837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.266007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.266015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.266310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.266319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.266639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.266648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.267035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.267043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.267364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.267373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 
00:39:07.036 [2024-09-27 15:57:47.267551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.267558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.267858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.267868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.268181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.268191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.268352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.268361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.268643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.268653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.268965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.268975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.269051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.269058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.269186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.036 [2024-09-27 15:57:47.269194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.036 qpair failed and we were unable to recover it. 00:39:07.036 [2024-09-27 15:57:47.269448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.269456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.269629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.269638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 
00:39:07.037 [2024-09-27 15:57:47.269823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.269831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.269973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.269983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.270158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.270167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.270347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.270356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.270658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.270668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.270852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.270862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.271178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.271189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.271509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.271518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.271716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.271727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.271987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.271997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 
00:39:07.037 [2024-09-27 15:57:47.272208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.272218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.272538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.272547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.272856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.272864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.273189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.273197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.273514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.273523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.273851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.273861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.274055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.274063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.274385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.274393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.274594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.274604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.274789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.274799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 
00:39:07.037 [2024-09-27 15:57:47.274967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.274977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.275162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.275172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.275343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.275353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.275511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.275520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.275561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.275570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.275726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.275734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.276042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.276052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.276238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.276248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.276568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.276577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.276764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.276774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 
00:39:07.037 [2024-09-27 15:57:47.277057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.277067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.277376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.277386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.277563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.277573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.277908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.277918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.037 [2024-09-27 15:57:47.278238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.037 [2024-09-27 15:57:47.278247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.037 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.278568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.278579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.278925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.278934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.279113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.279122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.279291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.279300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.279643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.279653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 
00:39:07.038 [2024-09-27 15:57:47.279845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.279854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.280125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.280135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.280446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.280455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.280762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.280770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.281017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.281026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.281201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.281208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.281364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.281371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.281678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.281688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.281851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.281861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.282183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.282193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 
00:39:07.038 [2024-09-27 15:57:47.282526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.282535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.282841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.282849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.283148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.283158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.283482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.283490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.283812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.283821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.284134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.284144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.284456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.284464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.284502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.284509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.284782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.284791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.285001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.285010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 
00:39:07.038 [2024-09-27 15:57:47.285301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.285310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.285487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.285495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.285818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.285829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.286012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.286022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.286349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.286358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.286484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.286491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.286709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.286717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.287047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.287057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.038 qpair failed and we were unable to recover it. 00:39:07.038 [2024-09-27 15:57:47.287385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.038 [2024-09-27 15:57:47.287393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.287707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.287716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 
00:39:07.039 [2024-09-27 15:57:47.287873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.287882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.288077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.288085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.288365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.288373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.288543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.288551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.288854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.288863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.289263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.289273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.289585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.289594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.289744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.289754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.290057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.290067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.290271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.290280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 
00:39:07.039 [2024-09-27 15:57:47.290594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.290604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.290943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.290952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.291256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.291266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.291584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.291593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.291911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.291921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.292202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.292211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.292562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.292572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.292876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.292885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.293224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.293233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.293545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.293553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 
00:39:07.039 [2024-09-27 15:57:47.293758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.293767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.294052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.294061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.294389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.294397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.294719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.294728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.294920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.294929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.295123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.295133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.295445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.295455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.295659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.295667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.295971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.295980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.296148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.296156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 
00:39:07.039 [2024-09-27 15:57:47.296334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.296343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.296662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.296672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.296988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.297001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.297328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.297338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.297627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.039 [2024-09-27 15:57:47.297635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.039 qpair failed and we were unable to recover it. 00:39:07.039 [2024-09-27 15:57:47.297868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.297876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.298067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.298263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.298564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.298624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 
00:39:07.040 [2024-09-27 15:57:47.298668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.298873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.298882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.299068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.299077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.299260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.299268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.299559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.299567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.299897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.299906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.300214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.300223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.300407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.300414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.300580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.300587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.300870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.300879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 
00:39:07.040 [2024-09-27 15:57:47.301072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.301080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.301421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.301429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.301467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.301476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.301746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.301755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.301918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.301928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.302114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.302124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.302308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.302317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.302482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.302491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.302663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.302671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.302975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.302984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 
00:39:07.040 [2024-09-27 15:57:47.303176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.303187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.303515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.303524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.303833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.303842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.304221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.304230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.304550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.304558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.304869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.304877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.305146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.305155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.305234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.305242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.305539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.305548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 00:39:07.040 [2024-09-27 15:57:47.305934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.040 [2024-09-27 15:57:47.305943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.040 qpair failed and we were unable to recover it. 
00:39:07.041 [2024-09-27 15:57:47.306084 - 15:57:47.336264] (120 near-identical retry entries elided: each repeats "posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.")
00:39:07.044 [2024-09-27 15:57:47.336453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.336461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.336597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.336605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.336644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7cb60 (9): Bad file descriptor
00:39:07.044 [2024-09-27 15:57:47.337230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.337322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.337797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.337836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.338165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.338174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.338506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.338517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.338688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.338698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.339044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.339054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.044 [2024-09-27 15:57:47.339376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.044 [2024-09-27 15:57:47.339384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.044 qpair failed and we were unable to recover it.
00:39:07.045 [2024-09-27 15:57:47.339746 - 15:57:47.356483] (60 near-identical retry entries elided: same connect() failed, errno = 111 / sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / qpair failed sequence)
00:39:07.046 [2024-09-27 15:57:47.356799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.356808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.356957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.356967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.357172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.357180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.357459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.357467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.357835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.357844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.358156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.358164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.358341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.358349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.358694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.358703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.358868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.358875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.359092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.359100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 
00:39:07.046 [2024-09-27 15:57:47.359469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.359478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.359830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.359839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.360208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.360216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.360532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.360540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.360852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.360861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.361208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.361217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.361493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.361502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.361811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.361820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.362129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.362139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.362395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.362404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 
00:39:07.046 [2024-09-27 15:57:47.362579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.362588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.362913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.362923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.362957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.362965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.363266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.363275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.363590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.363598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.363916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.363925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.364222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.364230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.364409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.364418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.364700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.046 [2024-09-27 15:57:47.364708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.046 qpair failed and we were unable to recover it. 00:39:07.046 [2024-09-27 15:57:47.364874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.364884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 
00:39:07.047 [2024-09-27 15:57:47.365069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.365077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.365380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.365390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.365698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.365706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.366084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.366092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.366397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.366405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.366716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.366724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.367033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.367042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.367356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.367364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.367682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.367690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.368000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.368008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 
00:39:07.047 [2024-09-27 15:57:47.368306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.368314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.368475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.368481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.368832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.368841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.369067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.369076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.369396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.369405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.369715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.369723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.369875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.369881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.370212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.370220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.370531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.370539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.370823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.370831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 
00:39:07.047 [2024-09-27 15:57:47.371196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.371204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.371510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.371518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.371827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.371836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.372144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.372153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.372470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.372479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.372648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.372657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.372907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.372916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.373254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.373262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.373582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.373591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.373793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.373802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 
00:39:07.047 [2024-09-27 15:57:47.373956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.373964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.374209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.374217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.374525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.374533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.374739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.374747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.374956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.374965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.375243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.047 [2024-09-27 15:57:47.375252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.047 qpair failed and we were unable to recover it. 00:39:07.047 [2024-09-27 15:57:47.375560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.375568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.375736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.375745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.376048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.376057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.376375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.376383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 
00:39:07.048 [2024-09-27 15:57:47.376708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.376716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.376997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.377006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.377158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.377166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.377446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.377455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.377621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.377630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.377844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.377852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.378172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.378181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.378491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.378500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.378846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.378854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.379035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.379045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 
00:39:07.048 [2024-09-27 15:57:47.379340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.379348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.379693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.379701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.380022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.380031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.380340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.380349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.380600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.380608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.380898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.380907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.381190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.381198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.381514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.381523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.381905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.381915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.382089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.382097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 
00:39:07.048 [2024-09-27 15:57:47.382286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.382294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.382545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.382553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.382859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.382868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.383179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.383188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.383350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.383358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.383632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.383641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.383938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.383946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.384242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.384250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.384554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.384562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.048 qpair failed and we were unable to recover it. 00:39:07.048 [2024-09-27 15:57:47.384869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.048 [2024-09-27 15:57:47.384879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 
00:39:07.049 [2024-09-27 15:57:47.385049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.385057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.385250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.385257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.385519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.385528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.385692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.385701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.386029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.386038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.386357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.386365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.386672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.386681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.386989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.386997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.387307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.387316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.387509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.387516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 
00:39:07.049 [2024-09-27 15:57:47.387555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.387562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.387747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.387755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.388044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.388053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.388355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.388364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.388532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.388539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.388850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.388858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.389082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.389091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.389412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.389421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.389590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.389598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.389786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.389793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 
00:39:07.049 [2024-09-27 15:57:47.390009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.390017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.390208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.390217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.390500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.390510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.390823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.390832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.391028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.391037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.391354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.391362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.391675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.391687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.391993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.392002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.392286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.392294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.392615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.392624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 
00:39:07.049 [2024-09-27 15:57:47.392799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.392808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.393127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.393135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.393442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.393450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.393765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.393773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.393925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.393932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.049 [2024-09-27 15:57:47.394246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.049 [2024-09-27 15:57:47.394255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.049 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.394563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.394571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.394714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.394721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.395037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.395046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.395361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.395369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 
00:39:07.050 [2024-09-27 15:57:47.395697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.395707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.395783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.395790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.396092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.396100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.396282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.396290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.396605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.396615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.396919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.396927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.397244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.397252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.397421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.397431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.397739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.397747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 00:39:07.050 [2024-09-27 15:57:47.398141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.050 [2024-09-27 15:57:47.398150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.050 qpair failed and we were unable to recover it. 
00:39:07.055 [2024-09-27 15:57:47.453025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.055 [2024-09-27 15:57:47.453032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.055 qpair failed and we were unable to recover it. 00:39:07.055 [2024-09-27 15:57:47.453105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.055 [2024-09-27 15:57:47.453112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.055 qpair failed and we were unable to recover it. 00:39:07.055 [2024-09-27 15:57:47.453415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.055 [2024-09-27 15:57:47.453423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.453789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.453798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.454077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.454085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.454263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.454271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.454430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.454439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.454703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.454712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.455030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.455038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.455313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.455322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 
00:39:07.056 [2024-09-27 15:57:47.455633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.455642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.455843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.455852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.456023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.456031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.456338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.456348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.456658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.456667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.456993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.457001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.457198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.457207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.457359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.457367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.457707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.457715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.457907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.457915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 
00:39:07.056 [2024-09-27 15:57:47.458221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.458230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.458537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.458546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.458881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.458889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.459198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.459207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.459374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.459381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.459537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.459545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.459872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.459881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.460057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.460066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.460377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.460386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.460691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.460701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 
00:39:07.056 [2024-09-27 15:57:47.461011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.461019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.461188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.461194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.461510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.461518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.461838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.461846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.462146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.462155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.462474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.462482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.462804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.462812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.463137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.463146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.056 [2024-09-27 15:57:47.463412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.056 [2024-09-27 15:57:47.463420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.056 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.463599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.463607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 
00:39:07.057 [2024-09-27 15:57:47.463934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.463942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.464282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.464290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.464477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.464487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.464769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.464777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.465117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.465125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.465438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.465447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.465757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.465766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.466056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.466066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.466398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.466407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.466741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.466751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 
00:39:07.057 [2024-09-27 15:57:47.467086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.467095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.467406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.467415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.467725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.467733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.468051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.468059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.468358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.468366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.468678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.468686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.468994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.469003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.469182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.469191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.469525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.469534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.469685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.469692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 
00:39:07.057 [2024-09-27 15:57:47.469981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.469989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.470305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.470314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.470625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.470634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.470951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.470960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.471137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.471146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.471475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.471484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.471679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.471688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.471838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.471846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.472132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.472140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.472448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.472456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 
00:39:07.057 [2024-09-27 15:57:47.472783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.057 [2024-09-27 15:57:47.472791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.057 qpair failed and we were unable to recover it. 00:39:07.057 [2024-09-27 15:57:47.473182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.473190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.473507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.473515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.473823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.473831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.474141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.474149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.474498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.474506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.474697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.474705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.475022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.475030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.475210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.475218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.475371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.475379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 
00:39:07.058 [2024-09-27 15:57:47.475596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.475603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.475869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.475877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.476042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.476049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.476319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.476327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.476635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.476643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.476948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.476957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.477253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.477261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.477577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.477586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.477903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.477914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.478264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.478272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 
00:39:07.058 [2024-09-27 15:57:47.478597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.478605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.478787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.478794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.479004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.479012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.479120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.479128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.479455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.479463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.479777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.479785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.480027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.480036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.480225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.480233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.480550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.480559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.480731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.480738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 
00:39:07.058 [2024-09-27 15:57:47.481044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.481053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.481378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.481387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.481698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.481706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.482029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.482038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.482393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.482401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.482724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.482733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.483046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.483054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.058 qpair failed and we were unable to recover it. 00:39:07.058 [2024-09-27 15:57:47.483221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.058 [2024-09-27 15:57:47.483228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.483562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.483573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.483841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.483849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 
00:39:07.059 [2024-09-27 15:57:47.484033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.484041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.484331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.484340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.484377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.484385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.484716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.484724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.484875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.484881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.485093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.485103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.485416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.485424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.485593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.485602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.485903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.485912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.486070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.486078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 
00:39:07.059 [2024-09-27 15:57:47.486344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.486353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.486666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.486674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.486834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.486840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.487014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.487022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.487297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.487307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.487613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.487621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.487888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.487902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.488080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.488088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.488245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.488254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 00:39:07.059 [2024-09-27 15:57:47.488294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.059 [2024-09-27 15:57:47.488302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.059 qpair failed and we were unable to recover it. 
00:39:07.059 [2024-09-27 15:57:47.488571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:07.059 [2024-09-27 15:57:47.488579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 
00:39:07.059 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with only the timestamps changing, for roughly 200 further connection attempts between 15:57:47.488 and 15:57:47.544 ...]
00:39:07.346 [2024-09-27 15:57:47.544190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:07.346 [2024-09-27 15:57:47.544200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 
00:39:07.346 qpair failed and we were unable to recover it. 
00:39:07.346 [2024-09-27 15:57:47.544513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.544522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.544679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.544689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.545018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.545027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.545315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.545323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.545490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.545498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.545828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.545837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.546156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.546165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.546323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.546332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.546650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.546659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.546706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.546714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 
00:39:07.346 [2024-09-27 15:57:47.546874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.546883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.346 [2024-09-27 15:57:47.547245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.346 [2024-09-27 15:57:47.547256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.346 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.547445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.547455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.547723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.547732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.548053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.548062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.548338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.548347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.548530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.548539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.548703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.548712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.549022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.549032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.549346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.549355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 
00:39:07.347 [2024-09-27 15:57:47.549671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.549679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.550041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.550050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.550370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.550379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.550693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.550701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.550864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.550873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.551034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.551043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.551335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.551342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.551535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.551541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.551698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.551705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.552008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.552015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 
00:39:07.347 [2024-09-27 15:57:47.552184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.552191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.552360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.552367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.552643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.552650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.552823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.552830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.553140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.553147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.553465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.553472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.553784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.553791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.553965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.553972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.554154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.554161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.554310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.554316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 
00:39:07.347 [2024-09-27 15:57:47.554465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.554472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.554812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.554821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.555016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.555024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.555363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.555371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.555659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.555667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.555879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.555888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.556086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.556095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.556389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.556398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.347 qpair failed and we were unable to recover it. 00:39:07.347 [2024-09-27 15:57:47.556712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.347 [2024-09-27 15:57:47.556721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.556886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.556899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 
00:39:07.348 [2024-09-27 15:57:47.557073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.557082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.557369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.557378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.557546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.557554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.557721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.557731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.557918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.557929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.558261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.558270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.558584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.558594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.558633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.558641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.558802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.558812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.558987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.558997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 
00:39:07.348 [2024-09-27 15:57:47.559186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.559196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.559554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.559564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.559744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.559754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.560083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.560092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.560420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.560429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.560610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.560619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.560805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.560815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.560945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.560956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.561246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.561256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.561410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.561419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 
00:39:07.348 [2024-09-27 15:57:47.561665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.561674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.561725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.561733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.562038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.562048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.562204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.562213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.562526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.562536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.562839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.562849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.563026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.563036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.563193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.563203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.563499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.563510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.563683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.563693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 
00:39:07.348 [2024-09-27 15:57:47.563961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.563971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.564281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.564292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.564605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.564615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.564935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.564945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.565277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.565286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.348 [2024-09-27 15:57:47.565334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.348 [2024-09-27 15:57:47.565342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.348 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.565645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.565655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.565967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.565978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.566299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.566309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.566618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.566628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 
00:39:07.349 [2024-09-27 15:57:47.566799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.566809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.566856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.566863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.567054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.567064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.567344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.567355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.567672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.567681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.567866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.567876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.568042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.568052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.568336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.568346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.568649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.568658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.568809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.568818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 
00:39:07.349 [2024-09-27 15:57:47.569138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.569148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.569338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.569348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.569632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.569642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.569811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.569822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.570113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.570124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.570457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.570467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.570654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.570665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.570958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.570967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.571366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.571376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.571563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.571572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 
00:39:07.349 [2024-09-27 15:57:47.571910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.571920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.572235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.572244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.572435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.572443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.572619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.572627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.572963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.572975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.573203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.573211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.573524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.573533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.573729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.573738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.573804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.573812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.574105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.574114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 
00:39:07.349 [2024-09-27 15:57:47.574283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.574291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.574668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.574677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.349 [2024-09-27 15:57:47.575023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.349 [2024-09-27 15:57:47.575032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.349 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.575361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.575369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.575558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.575566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.575909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.575918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.576351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.576359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.576528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.576537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.576847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.576856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 00:39:07.350 [2024-09-27 15:57:47.577146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.350 [2024-09-27 15:57:47.577154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.350 qpair failed and we were unable to recover it. 
00:39:07.350 [2024-09-27 15:57:47.577350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.350 [2024-09-27 15:57:47.577359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.350 qpair failed and we were unable to recover it.
00:39:07.355 [the three lines above repeat, with only the timestamps advancing, for about 200 further connect attempts (15:57:47.577450 through 15:57:47.636181); every attempt fails with errno = 111 against the same tqpair=0x1a6eca0 at addr=10.0.0.2, port=4420]
00:39:07.355 [2024-09-27 15:57:47.636342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.355 [2024-09-27 15:57:47.636351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.636636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.636645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.637004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.637013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.637322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.637332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.637517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.637526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.637570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.637578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.637859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.637869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.638197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.638206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.638242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.638249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.638581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.638591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 
00:39:07.356 [2024-09-27 15:57:47.638912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.638923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.639108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.639118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.639435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.639444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.639612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.639622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.639962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.639971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.640288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.640297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.640336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.640342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.640625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.640633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.640946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.640956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.641272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.641281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 
00:39:07.356 [2024-09-27 15:57:47.641611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.641621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.641938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.641949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.642261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.642271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.642553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.642563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.642926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.642937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.643089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.643098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.643419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.643431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.643589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.643598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.643845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.643854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.644013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.644024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 
00:39:07.356 [2024-09-27 15:57:47.644324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.644333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.644627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.644636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.644702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.644711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.645273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.356 [2024-09-27 15:57:47.645373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.356 qpair failed and we were unable to recover it. 00:39:07.356 [2024-09-27 15:57:47.645536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.645572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.645966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.645980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.646254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.646263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.646590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.646599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.646948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.646959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.647292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.647302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 
00:39:07.357 [2024-09-27 15:57:47.647487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.647496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.647823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.647832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.648147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.648156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.648320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.648329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.648598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.648607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.648919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.648928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.649104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.649115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.649463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.649473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.649780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.649791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.650109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.650120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 
00:39:07.357 [2024-09-27 15:57:47.650433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.650444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.650604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.650614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.650941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.650952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.651257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.651266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.651621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.651630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.651965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.651976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.652179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.652190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.652514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.652524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.652682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.652692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.652965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.652978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 
00:39:07.357 [2024-09-27 15:57:47.653279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.653288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.653606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.653616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.653930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.653940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.654264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.654274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.654599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.654609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.654906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.654915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.655230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.655240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.655554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.655564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.655882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.655892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.357 [2024-09-27 15:57:47.656297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.656307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 
00:39:07.357 [2024-09-27 15:57:47.656623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.357 [2024-09-27 15:57:47.656633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.357 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.656948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.656957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.657280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.657289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.657609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.657618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.657995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.658005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.658392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.658403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.658722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.658732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.659058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.659068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.659370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.659379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.659700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.659711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 
00:39:07.358 [2024-09-27 15:57:47.659888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.659902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.660237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.660247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.660521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.660530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.660865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.660875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.661070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.661079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.661412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.661421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.661605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.661618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.661960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.661969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.662125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.662135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.662450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.662459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 
00:39:07.358 [2024-09-27 15:57:47.662775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.662784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.662952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.662964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.663047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.663056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.663385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.663395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.663754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.663765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.664097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.664107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.664408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.664417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.664736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.664747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.665059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.665084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.665276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.665289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 
00:39:07.358 [2024-09-27 15:57:47.665625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.665636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.665960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.665971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.666292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.666303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.666484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.666494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.666822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.666833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.667148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.667160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.667345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.667357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.667679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.358 [2024-09-27 15:57:47.667691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.358 qpair failed and we were unable to recover it. 00:39:07.358 [2024-09-27 15:57:47.667965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.667976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.668292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.668302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 
00:39:07.359 [2024-09-27 15:57:47.668622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.668632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.668951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.668961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.669367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.669379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.669572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.669585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.669877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.669887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.670235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.670247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.670561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.670572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.670768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.670778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.670984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.670993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.671339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.671350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 
00:39:07.359 [2024-09-27 15:57:47.671687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.671696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.671892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.671910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.672109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.672117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.672417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.672428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.672743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.672754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.673101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.673114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.673291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.673300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.673625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.673636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.673922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.673933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 00:39:07.359 [2024-09-27 15:57:47.674253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.359 [2024-09-27 15:57:47.674263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.359 qpair failed and we were unable to recover it. 
00:39:07.359 [2024-09-27 15:57:47.674594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.359 [2024-09-27 15:57:47.674605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.359 qpair failed and we were unable to recover it.
00:39:07.364 [... identical record sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 2024-09-27 15:57:47.674929 through 15:57:47.735175; duplicate records elided ...]
00:39:07.365 [2024-09-27 15:57:47.735524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.735534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.735728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.735741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.735908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.735918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.736219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.736230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.736395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.736408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.736632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.736644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.736812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.736823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.736910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.736919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.737203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.737215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.737589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.737600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 
00:39:07.365 [2024-09-27 15:57:47.738020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.738031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.738263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.738272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.738573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.738583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.738936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.738947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.739118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.739129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.739325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.739338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.739508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.739520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.739724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.365 [2024-09-27 15:57:47.739735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.365 qpair failed and we were unable to recover it. 00:39:07.365 [2024-09-27 15:57:47.740090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.740102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.740426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.740437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 
00:39:07.366 [2024-09-27 15:57:47.740769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.740781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.741108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.741120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.741296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.741305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.741497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.741508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.741957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.742065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.742371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.742412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.742817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.742849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.743272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.743309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.743665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.743697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.744203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.744261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 
00:39:07.366 [2024-09-27 15:57:47.744343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.744354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.744572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.744584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.744943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.744955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.745173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.745183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.745481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.745491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.745825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.745836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.746132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.746143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.746469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.746481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.746679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.746690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.747040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.747050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 
00:39:07.366 [2024-09-27 15:57:47.747430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.747441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.747761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.747773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.748019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.748031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.748214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.748225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.748406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.748415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.748603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.748612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.748939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.748951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.749286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.749298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.749631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.749641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.750006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.750017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 
00:39:07.366 [2024-09-27 15:57:47.750337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.750348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.750394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.750402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.750583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.750592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.750765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.750776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.751073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.751083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.366 qpair failed and we were unable to recover it. 00:39:07.366 [2024-09-27 15:57:47.751424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.366 [2024-09-27 15:57:47.751434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.751483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.751489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.751800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.751810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.752109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.752128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.752327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.752337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 
00:39:07.367 [2024-09-27 15:57:47.752685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.752695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.752864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.752876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.753110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.753120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.753317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.753328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.753379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.753389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.753615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.753627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.753927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.753936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.754242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.754252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.754443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.754453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.754782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.754795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 
00:39:07.367 [2024-09-27 15:57:47.755221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.755233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.755406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.755416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.755716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.755727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.756072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.756082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.756310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.756319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.756639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.756649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.756985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.756995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.757298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.757309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.757495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.757506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.757847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.757857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 
00:39:07.367 [2024-09-27 15:57:47.758282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.758294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.758639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.758650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.758987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.758997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.759316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.759326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.759518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.759528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.759817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.759831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.760156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.760166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.760499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.760509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.760863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.760872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.761194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.761203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 
00:39:07.367 [2024-09-27 15:57:47.761532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.761541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.761755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.761764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.367 [2024-09-27 15:57:47.762092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.367 [2024-09-27 15:57:47.762102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.367 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.762316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.762326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.762710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.762721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.763027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.763038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.763210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.763219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.763502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.763514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.763833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.763845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.764049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.764059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 
00:39:07.368 [2024-09-27 15:57:47.764264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.764275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.764604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.764615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.764812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.764822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.765011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.765022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.765369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.765378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.765714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.765726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.766057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.766068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.766248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.766259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.766445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.766456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.766787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.766798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 
00:39:07.368 [2024-09-27 15:57:47.766988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.766999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.767235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.767245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.767672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.767685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.767873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.767884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.768238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.768248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.768299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.768306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.768575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.768584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.768873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.768884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.769097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.769108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.769391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.769402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 
00:39:07.368 [2024-09-27 15:57:47.769598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.769611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.769953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.769963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.770289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.770300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.770625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.770637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.770966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.770977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.771102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.771109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.771377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.771386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.771718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.771727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.771907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.771918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 00:39:07.368 [2024-09-27 15:57:47.772312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.368 [2024-09-27 15:57:47.772324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.368 qpair failed and we were unable to recover it. 
00:39:07.368 [2024-09-27 15:57:47.772550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.369 [2024-09-27 15:57:47.772561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.369 qpair failed and we were unable to recover it.
00:39:07.369 [2024-09-27 15:57:47.772899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.369 [2024-09-27 15:57:47.772910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.369 qpair failed and we were unable to recover it.
[the same three-line failure sequence repeats for every subsequent connect attempt from 15:57:47.773112 through 15:57:47.830945 (elapsed 00:39:07.369 to 00:39:07.644), each against tqpair=0x1a6eca0 at 10.0.0.2, port 4420, each with errno = 111]
00:39:07.645 [2024-09-27 15:57:47.831025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.831035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.831128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.831137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.831466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.831476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.831806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.831817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.832023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.832038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.832383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.832420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.832811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.832840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.832927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.832938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.833286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.833315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.833647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.833672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 
00:39:07.645 [2024-09-27 15:57:47.834044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.834069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.834442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.834461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.834804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.834832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.835178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.835215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.835575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.835613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.835974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.836014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.836374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.836415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.836818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.836858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.837317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.837364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.837723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.837768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 
00:39:07.645 [2024-09-27 15:57:47.838144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.838188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.838525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.838567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.838779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.838798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.839000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.839017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.839366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.839404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.839613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.839638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.839991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.840043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.840147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.840166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.840712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.840842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f0000b90 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 00:39:07.645 [2024-09-27 15:57:47.841227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.645 [2024-09-27 15:57:47.841269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f0000b90 with addr=10.0.0.2, port=4420 00:39:07.645 qpair failed and we were unable to recover it. 
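The records above show the NVMe/TCP initiator retrying its TCP connect to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) and failing every attempt with errno = 111, which on Linux is ECONNREFUSED: nothing is accepting on that address yet, so each qpair connect is refused. A minimal self-contained sketch of the failing call follows; the address and port are copied from the log, and any reachable host with no listener on the port behaves the same way.

/* econnrefused_demo.c - minimal sketch of the failure the log reports.
 * Assumes a Linux host; 10.0.0.2:4420 is taken from the log above and is
 * expected to have no listener, so connect() fails with ECONNREFUSED,
 * which is errno 111 on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Matches the posix.c:1055 records: errno 111, "Connection refused". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a reachable host with nothing bound to the port, this prints "connect() failed, errno = 111 (Connection refused)", the same failure posix_sock_create logs.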
00:39:07.645 [2024-09-27 15:57:47.842128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.645 [2024-09-27 15:57:47.842158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f0000b90 with addr=10.0.0.2, port=4420
00:39:07.645 qpair failed and we were unable to recover it.
00:39:07.645 [2024-09-27 15:57:47.842719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.645 [2024-09-27 15:57:47.842824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.645 qpair failed and we were unable to recover it.
00:39:07.646 [2024-09-27 15:57:47.852335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.646 [2024-09-27 15:57:47.852362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.646 qpair failed and we were unable to recover it.
00:39:07.646 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:07.646 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:39:07.646 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:39:07.646 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:07.646 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.646 [2024-09-27 15:57:47.856188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.646 [2024-09-27 15:57:47.856291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.646 qpair failed and we were unable to recover it.
00:39:07.646 [2024-09-27 15:57:47.858861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.646 [2024-09-27 15:57:47.858906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420
00:39:07.646 qpair failed and we were unable to recover it.
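Interleaved with the connect retries, the shell trace above shows the nvmf_target_disconnect_tc2 harness leaving its readiness check ((( i == 0 )) then return 0) and closing the start_nvmf_tgt timing region with xtrace disabled, while the initiator keeps cycling through connect attempts. The pattern the surrounding records reflect is a bounded reconnect loop that eventually gives up ("qpair failed and we were unable to recover it"). Below is a self-contained C sketch of that pattern; it is not SPDK's actual code, and the attempt budget, delay, and target address are illustrative assumptions.

/* retry_connect.c - illustrative sketch (not SPDK code) of a bounded
 * reconnect loop: keep attempting the TCP connect until it succeeds or
 * the attempt budget is exhausted.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                  /* success: caller owns the socket */

    int saved = errno;
    close(fd);
    errno = saved;                  /* preserve the connect() error code */
    return -1;
}

int main(void)
{
    /* Attempt budget and delay are assumptions, not SPDK's real policy. */
    for (int attempt = 1; attempt <= 10; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        sleep(1);                   /* pause briefly before retrying */
    }
    fprintf(stderr, "unable to recover: giving up\n");
    return 1;
}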
00:39:07.646 [2024-09-27 15:57:47.859275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.646 [2024-09-27 15:57:47.859309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.646 qpair failed and we were unable to recover it. 00:39:07.646 [2024-09-27 15:57:47.859530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.646 [2024-09-27 15:57:47.859562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.646 qpair failed and we were unable to recover it. 00:39:07.646 [2024-09-27 15:57:47.859826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.646 [2024-09-27 15:57:47.859859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.646 qpair failed and we were unable to recover it. 00:39:07.646 [2024-09-27 15:57:47.860228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.646 [2024-09-27 15:57:47.860264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.646 qpair failed and we were unable to recover it. 00:39:07.646 [2024-09-27 15:57:47.860497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.860532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.860903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.860936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.861165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.861199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.861433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.861467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.861831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.861863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.862112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.862147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 
00:39:07.647 [2024-09-27 15:57:47.862524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.862558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.862917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.862950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.863301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.863334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.863604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.863642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.864043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.864078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.864465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.864499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.864861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.864904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.865249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.865281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.865635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.865668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.865929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.865962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 
00:39:07.647 [2024-09-27 15:57:47.866074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.866101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.866468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.866500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.866745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.866776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.867145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.867180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.867433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.867469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.867714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.867746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.868001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.868035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.868477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.868511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.868752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.868783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.869175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.869208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 
00:39:07.647 [2024-09-27 15:57:47.869576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.869608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.869967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.870001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.870250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.870281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.870521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.870555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.870762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.870794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.871154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.647 [2024-09-27 15:57:47.871187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.647 qpair failed and we were unable to recover it. 00:39:07.647 [2024-09-27 15:57:47.871539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.871574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.871939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.871973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.872359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.872391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.872751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.872782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 
00:39:07.648 [2024-09-27 15:57:47.873137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.873173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.873386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.873419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.873670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.873701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.874086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.874119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.874520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.874552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.874885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.874924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.875302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.875334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.875707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.875742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.876108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.876141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.876562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.876594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 
00:39:07.648 [2024-09-27 15:57:47.876813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.876844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.877247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.877280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.877641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.877673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.877935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.877975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.878225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.878257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.878615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.878646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.879025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.879058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.879279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.879309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.879666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.879698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 00:39:07.648 [2024-09-27 15:57:47.880068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:07.648 [2024-09-27 15:57:47.880103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420 00:39:07.648 qpair failed and we were unable to recover it. 
00:39:07.648 [2024-09-27 15:57:47.880461 .. 15:57:47.893832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 40 times)
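errno = 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is being actively refused, most likely because nothing is accepting NVMe/TCP connections on that port at this point in the test (the target's transport, subsystem, and namespace are still being set up in the traces below). A minimal shell sketch that reproduces the same failure mode, assuming the same unreachable address and port:

    # Probe 10.0.0.2:4420 once; /dev/tcp/<host>/<port> is a bash builtin.
    # With no listener behind the port, connect() fails with ECONNREFUSED (errno 111).
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect() to 10.0.0.2:4420 failed, as in the log above'
    fi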
00:39:07.649 [2024-09-27 15:57:47.894220 .. 15:57:47.896610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 7 times)
00:39:07.649 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:07.649 [2024-09-27 15:57:47.896857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.649 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:07.649 [2024-09-27 15:57:47.897136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
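The rpc_cmd trace at host/target_disconnect.sh@19 creates the RAM-backed bdev that will later be exposed as a namespace. Outside the test harness, the equivalent standalone call through SPDK's rpc.py would be a sketch like this (the script path is an assumption; 64 is the bdev size in MiB, 512 the block size in bytes):

    # Create a 64 MiB malloc (RAM-backed) bdev with 512-byte blocks named Malloc0.
    # On success rpc.py prints the bdev name, which is the lone 'Malloc0' line
    # that appears in the log further below.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0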
00:39:07.649 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.649 [2024-09-27 15:57:47.897522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.650 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.650 [2024-09-27 15:57:47.897914 .. 15:57:47.900132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 8 times)
00:39:07.650 [2024-09-27 15:57:47.900526 .. 15:57:47.903804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76f4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 11 times)
00:39:07.650 [2024-09-27 15:57:47.904385 .. 15:57:47.907559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 9 times)
00:39:07.650 [2024-09-27 15:57:47.907830 .. 15:57:47.914985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 20 times)
00:39:07.651 [2024-09-27 15:57:47.915313 .. 15:57:47.917705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 8 times)
00:39:07.651 Malloc0
00:39:07.651 [2024-09-27 15:57:47.918083 .. 15:57:47.918487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 2 times)
00:39:07.651 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.651 [2024-09-27 15:57:47.918923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.651 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:07.651 [2024-09-27 15:57:47.919246 .. 15:57:47.919403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 2 times)
00:39:07.651 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.651 [2024-09-27 15:57:47.919686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.651 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.651 [2024-09-27 15:57:47.919982 .. 15:57:47.920915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 4 times)
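host/target_disconnect.sh@21 initializes the NVMe-oF TCP transport on the target side; the '*** TCP Transport Init ***' notice a little further down is the corresponding confirmation from tcp.c. A hedged standalone equivalent (the -o flag is carried over verbatim from the trace; it belongs to this test's transport options rather than being required for a minimal target):

    # Bring up the TCP transport in the running SPDK target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o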
00:39:07.651 [2024-09-27 15:57:47.921185 .. 15:57:47.923619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 9 times)
00:39:07.652 [2024-09-27 15:57:47.924068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.652 [2024-09-27 15:57:47.924604 .. 15:57:47.925052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 2 times)
00:39:07.652 [2024-09-27 15:57:47.925342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:07.652 [2024-09-27 15:57:47.925485 .. 15:57:47.928380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 8 times)
00:39:07.652 [2024-09-27 15:57:47.928679 .. 15:57:47.934434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 18 times)
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.653 [2024-09-27 15:57:47.934686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:39:07.653 [2024-09-27 15:57:47.935060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.653 [2024-09-27 15:57:47.935237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.653 [2024-09-27 15:57:47.935520 .. 15:57:47.937401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 6 times)
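host/target_disconnect.sh@22 creates the subsystem the initiator is trying to reach. As a standalone sketch with the flags taken from the trace (-a allows any host to connect, -s sets the serial number):

    # Create NVMe-oF subsystem cnode1, allow any host, fixed serial number.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001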
00:39:07.653 [2024-09-27 15:57:47.937669 .. 15:57:47.944610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (repeated 20 times)
00:39:07.653 [2024-09-27 15:57:47.944869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.944911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 [2024-09-27 15:57:47.945256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.945289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 [2024-09-27 15:57:47.945679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.945711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 [2024-09-27 15:57:47.946079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.946123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.653 [2024-09-27 15:57:47.946492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.946525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:39:07.653 [2024-09-27 15:57:47.946916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.946950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.653 [2024-09-27 15:57:47.947095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.653 [2024-09-27 15:57:47.947125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.653 qpair failed and we were unable to recover it.
00:39:07.654 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.654 [2024-09-27 15:57:47.947359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.947391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.654 [2024-09-27 15:57:47.947741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.947774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.948108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.948141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.948503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.948534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.948905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.948939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.949104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.949135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.949512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.949544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.949760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.949791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.950156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.950189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.950557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.950588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.950929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.950969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.951182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.951214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.951566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.951600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.951961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.951994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.952354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.952388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.952752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.952783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.952921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.952958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.953342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.953374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.953715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.953747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.954122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.954155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.954456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.954486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.954866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.954905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.955276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.955309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.955676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.955708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.956072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.956106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.956319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.956351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.956682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.956712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.956954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.956987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.957341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.957372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.957680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.957711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.958051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.958085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 [2024-09-27 15:57:47.958304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.958336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.654 [2024-09-27 15:57:47.958697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.958731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:07.654 [2024-09-27 15:57:47.959100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.959132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.654 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.654 [2024-09-27 15:57:47.959385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.654 [2024-09-27 15:57:47.959416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.654 qpair failed and we were unable to recover it.
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.655 [2024-09-27 15:57:47.959828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.959861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.960282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.960315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.960534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.960565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.960920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.960953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.961312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.961343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.961589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.961622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.961835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.961869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.962229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.962262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.962481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.962512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.962760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.962793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.963182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.963215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.963457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.963489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.963843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.963875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.964245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.964278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
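Every retry above (and the last few just below) fails in the same place: posix_sock_create gets errno = 111 from connect(), which on Linux is ECONNREFUSED -- the host side of the test is dialing 10.0.0.2:4420 before the target has a TCP listener on that port, so each SYN is answered with a reset and the qpair never comes up. A minimal standalone sketch (not part of the test suite; the address and port are copied from the log) that reproduces the same errno against a port with no listener:

    /* connect_refused.c -- sketch only: dial a TCP endpoint with no listener
     * and print the errno, mirroring the posix.c:1055 failure in the log.
     * Assumes the address is locally reachable, as on the test rig; an
     * unreachable address would time out (ETIMEDOUT) instead of refusing. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With nothing listening on the port this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

The failure mode changes as soon as the target-side rpc_cmd nvmf_subsystem_add_listener completes (the tcp.c:1081 NOTICE in the next chunk): connect() then succeeds and the errors move up a layer, to the Fabrics CONNECT command itself.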
00:39:07.655 [2024-09-27 15:57:47.964503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.964537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.964933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.964967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.965356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:07.655 [2024-09-27 15:57:47.965387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6eca0 with addr=10.0.0.2, port=4420
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.965741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:07.655 [2024-09-27 15:57:47.976642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:47.976808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:47.976858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:47.976883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:47.976929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:47.976982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
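From here on the TCP connection itself succeeds, but the NVMe-oF CONNECT command is rejected: on the target, _nvmf_ctrlr_add_io_qpair cannot find controller ID 0x1 (the test has torn the controller down), and on the host nvme_fabric_qpair_connect_poll reports the completion status as sct 1, sc 130. A small sketch of how those two numbers unpack from a completion status word; the bit layout follows the NVMe specification, and reading SC 0x82 on a Fabrics CONNECT as "Connect Invalid Parameters" is an interpretation from the NVMe-oF spec, not something the log itself states:

    /* status_decode.c -- sketch only: split an NVMe completion status word
     * into the sct/sc fields that the host log prints as "sct 1, sc 130". */
    #include <stdio.h>
    #include <stdint.h>

    static void decode_status(uint16_t status)
    {
        /* NVMe CQE status layout: bit 0 phase, bits 8:1 Status Code (SC),
         * bits 11:9 Status Code Type (SCT); CRD/M/DNR sit above those. */
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x07;

        printf("sct %u, sc %u (0x%02x)\n", sct, sc, sc);
    }

    int main(void)
    {
        /* SCT 1 = command-specific status; SC 0x82 (decimal 130) on a Fabrics
         * CONNECT is "Connect Invalid Parameters" -- the host-side view of the
         * target's "Unknown controller ID 0x1" rejection. Prints: sct 1, sc 130 */
        decode_status((1u << 9) | (0x82u << 1));
        return 0;
    }

The rc -5 in the nvme_fabric.c:599 line and the "CQ transport error -6 (No such device or address)" in nvme_qpair.c:804 are the host's -EIO and -ENXIO views of the same rejected CONNECT; each retry below repeats this block with fresh timestamps.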
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.655 15:57:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 645729
00:39:07.655 [2024-09-27 15:57:47.986427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:47.986523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:47.986555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:47.986570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:47.986583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:47.986613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:47.996559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:47.996636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:47.996659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:47.996677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:47.996688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:47.996710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:48.006477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:48.006559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:48.006578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:48.006586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:48.006593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:48.006609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:48.016475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:48.016547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:48.016567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:48.016575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:48.016583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:48.016600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:48.026611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.655 [2024-09-27 15:57:48.026688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.655 [2024-09-27 15:57:48.026707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.655 [2024-09-27 15:57:48.026716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.655 [2024-09-27 15:57:48.026723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.655 [2024-09-27 15:57:48.026740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.655 qpair failed and we were unable to recover it.
00:39:07.655 [2024-09-27 15:57:48.036489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.036588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.036606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.036614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.036621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.036637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.046419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.046495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.046514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.046522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.046529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.046545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.056605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.056683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.056701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.056709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.056716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.056733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.066502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.066586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.066605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.066614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.066623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.066640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.076523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.076592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.076614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.076622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.076630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.076648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.086644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.086733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.086753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.086768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.086776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.086793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.096694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.096773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.096792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.096800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.096807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.096824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.106607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.106677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.106696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.106704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.106711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.106727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.656 [2024-09-27 15:57:48.116870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.656 [2024-09-27 15:57:48.116945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.656 [2024-09-27 15:57:48.116978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.656 [2024-09-27 15:57:48.116988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.656 [2024-09-27 15:57:48.116996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.656 [2024-09-27 15:57:48.117016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.656 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.126741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.126812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.126832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.126840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.126848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.126865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.136810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.136891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.136916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.136925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.136932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.136948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.146809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.146877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.146901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.146910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.146917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.146934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.156839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.156918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.156937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.156945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.156952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.156970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.166890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.167000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.167019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.167028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.167036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.167053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.176901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.176973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.176996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.177004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.177011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.177027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.187088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.187164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.187181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.187189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.187197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.187213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.197054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.197115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.197132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.197140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.197147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.197163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.207001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.207074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.207092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.207100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.920 [2024-09-27 15:57:48.207108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.920 [2024-09-27 15:57:48.207124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.920 qpair failed and we were unable to recover it.
00:39:07.920 [2024-09-27 15:57:48.217113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.920 [2024-09-27 15:57:48.217179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.920 [2024-09-27 15:57:48.217196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.920 [2024-09-27 15:57:48.217204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.217211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.217227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.227028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.227105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.227125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.227134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.227142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.227160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.237041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.237108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.237126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.237135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.237143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.237159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.247165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.247243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.247261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.247269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.247276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.247293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.257075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.257171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.257189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.257196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.257205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.257221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.267218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.267292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.267316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.267325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.267333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.267353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.277169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.277267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.277288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.277297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.277305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.277322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.287217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.287289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.287307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.287315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.287322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.287338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.297282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.297357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.297376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.297384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.297391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.297407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.307292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.307357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.307374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.307381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.307388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.307404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.317286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.317353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.317370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.317379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.317386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.317404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.327313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.327385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.327404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.327412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.327419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.327435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.337400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:07.921 [2024-09-27 15:57:48.337480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:07.921 [2024-09-27 15:57:48.337497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:07.921 [2024-09-27 15:57:48.337504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:07.921 [2024-09-27 15:57:48.337512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:07.921 [2024-09-27 15:57:48.337528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:07.921 qpair failed and we were unable to recover it.
00:39:07.921 [2024-09-27 15:57:48.347370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.921 [2024-09-27 15:57:48.347438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.921 [2024-09-27 15:57:48.347456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.347464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.347472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.347488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 00:39:07.922 [2024-09-27 15:57:48.357410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.922 [2024-09-27 15:57:48.357479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.922 [2024-09-27 15:57:48.357502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.357510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.357517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.357534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 00:39:07.922 [2024-09-27 15:57:48.367447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.922 [2024-09-27 15:57:48.367520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.922 [2024-09-27 15:57:48.367540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.367548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.367556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.367572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 
00:39:07.922 [2024-09-27 15:57:48.377468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.922 [2024-09-27 15:57:48.377543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.922 [2024-09-27 15:57:48.377560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.377568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.377575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.377591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 00:39:07.922 [2024-09-27 15:57:48.387480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.922 [2024-09-27 15:57:48.387543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.922 [2024-09-27 15:57:48.387561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.387569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.387577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.387592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 00:39:07.922 [2024-09-27 15:57:48.397512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:07.922 [2024-09-27 15:57:48.397577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:07.922 [2024-09-27 15:57:48.397594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:07.922 [2024-09-27 15:57:48.397602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:07.922 [2024-09-27 15:57:48.397609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:07.922 [2024-09-27 15:57:48.397631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:07.922 qpair failed and we were unable to recover it. 
00:39:08.185 [2024-09-27 15:57:48.407561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.407636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.407658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.407668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.407676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.407694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.417622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.417733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.417754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.417762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.417769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.417787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.427617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.427689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.427727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.427738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.427746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.427772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 
00:39:08.185 [2024-09-27 15:57:48.437616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.437692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.437714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.437723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.437731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.437751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.447687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.447757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.447784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.447793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.447800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.447819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.457717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.457797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.457815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.457823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.457830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.457846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 
00:39:08.185 [2024-09-27 15:57:48.467645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.467711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.467730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.467738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.467745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.467762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.477762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.477826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.477843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.477851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.477858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.477875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 00:39:08.185 [2024-09-27 15:57:48.487785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.487859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.487880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.185 [2024-09-27 15:57:48.487888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.185 [2024-09-27 15:57:48.487902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.185 [2024-09-27 15:57:48.487927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.185 qpair failed and we were unable to recover it. 
00:39:08.185 [2024-09-27 15:57:48.497832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.185 [2024-09-27 15:57:48.497920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.185 [2024-09-27 15:57:48.497938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.497946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.497955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.497973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.507820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.507880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.507903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.507911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.507919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.507935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.517907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.518009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.518026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.518034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.518042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.518058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 
00:39:08.186 [2024-09-27 15:57:48.527934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.528006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.528024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.528033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.528040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.528056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.537984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.538061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.538083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.538091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.538099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.538115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.547996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.548060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.548079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.548087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.548094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.548110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 
00:39:08.186 [2024-09-27 15:57:48.557999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.558066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.558087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.558096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.558106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.558126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.568015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.568093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.568113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.568122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.568130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.568149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.578102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.578179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.578198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.578207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.578214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.578245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 
00:39:08.186 [2024-09-27 15:57:48.588063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.588129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.588146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.588154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.588161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.588177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.598030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.598094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.598111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.598119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.598126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.598142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.608157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.608274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.608292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.608300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.608308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.608323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 
00:39:08.186 [2024-09-27 15:57:48.618243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.618359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.186 [2024-09-27 15:57:48.618377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.186 [2024-09-27 15:57:48.618385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.186 [2024-09-27 15:57:48.618392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.186 [2024-09-27 15:57:48.618408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.186 qpair failed and we were unable to recover it. 00:39:08.186 [2024-09-27 15:57:48.628197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.186 [2024-09-27 15:57:48.628256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.187 [2024-09-27 15:57:48.628279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.187 [2024-09-27 15:57:48.628288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.187 [2024-09-27 15:57:48.628295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.187 [2024-09-27 15:57:48.628311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.187 qpair failed and we were unable to recover it. 00:39:08.187 [2024-09-27 15:57:48.638205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.187 [2024-09-27 15:57:48.638265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.187 [2024-09-27 15:57:48.638282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.187 [2024-09-27 15:57:48.638290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.187 [2024-09-27 15:57:48.638297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.187 [2024-09-27 15:57:48.638312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.187 qpair failed and we were unable to recover it. 
00:39:08.187 [2024-09-27 15:57:48.648282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.187 [2024-09-27 15:57:48.648346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.187 [2024-09-27 15:57:48.648363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.187 [2024-09-27 15:57:48.648371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.187 [2024-09-27 15:57:48.648378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.187 [2024-09-27 15:57:48.648393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.187 qpair failed and we were unable to recover it. 00:39:08.187 [2024-09-27 15:57:48.658329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.187 [2024-09-27 15:57:48.658402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.187 [2024-09-27 15:57:48.658419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.187 [2024-09-27 15:57:48.658427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.187 [2024-09-27 15:57:48.658434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.187 [2024-09-27 15:57:48.658449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.187 qpair failed and we were unable to recover it. 00:39:08.187 [2024-09-27 15:57:48.668214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.187 [2024-09-27 15:57:48.668277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.187 [2024-09-27 15:57:48.668300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.187 [2024-09-27 15:57:48.668308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.187 [2024-09-27 15:57:48.668322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.187 [2024-09-27 15:57:48.668342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.187 qpair failed and we were unable to recover it. 
00:39:08.450 [2024-09-27 15:57:48.678253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.678316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.678333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.678342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.678349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.678364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.688441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.688549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.688567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.688575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.688581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.688597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.698422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.698506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.698527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.698535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.698542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.698559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 
00:39:08.450 [2024-09-27 15:57:48.708449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.708539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.708558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.708568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.708575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.708590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.718477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.718590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.718616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.718624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.718631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.718648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.728504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.728571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.728588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.728596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.728603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.728618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 
00:39:08.450 [2024-09-27 15:57:48.738586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.738661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.738678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.738686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.738693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.738709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.748515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.450 [2024-09-27 15:57:48.748574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.450 [2024-09-27 15:57:48.748591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.450 [2024-09-27 15:57:48.748598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.450 [2024-09-27 15:57:48.748606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.450 [2024-09-27 15:57:48.748621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.450 qpair failed and we were unable to recover it. 00:39:08.450 [2024-09-27 15:57:48.758615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.758679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.758696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.758704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.758717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.758733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 
00:39:08.451 [2024-09-27 15:57:48.768535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.768603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.768622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.768630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.768637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.768653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.778735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.778837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.778854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.778864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.778871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.778886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.788705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.788766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.788783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.788791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.788798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.788815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 
00:39:08.451 [2024-09-27 15:57:48.798699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.798772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.798789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.798797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.798804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.798820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.808746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.808827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.808845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.808853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.808861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.808876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.818779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.818855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.818872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.818880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.818887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.818910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 
00:39:08.451 [2024-09-27 15:57:48.828776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.828845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.828862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.828870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.828877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.828893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.838823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.838890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.838913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.838921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.838928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.838945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.848865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.848978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.848999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.849007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.849021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.849039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 
00:39:08.451 [2024-09-27 15:57:48.858922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.858992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.859010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.859018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.859025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.859041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.868946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.869047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.869068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.869076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.869083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.869100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.451 qpair failed and we were unable to recover it. 00:39:08.451 [2024-09-27 15:57:48.878930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:08.451 [2024-09-27 15:57:48.878993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:08.451 [2024-09-27 15:57:48.879010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:08.451 [2024-09-27 15:57:48.879019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:08.451 [2024-09-27 15:57:48.879026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:08.451 [2024-09-27 15:57:48.879042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:08.452 qpair failed and we were unable to recover it. 
00:39:09.244 [2024-09-27 15:57:49.550799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.244 [2024-09-27 15:57:49.550847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.244 [2024-09-27 15:57:49.550861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.244 [2024-09-27 15:57:49.550868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.244 [2024-09-27 15:57:49.550874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.244 [2024-09-27 15:57:49.550888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.244 qpair failed and we were unable to recover it. 00:39:09.244 [2024-09-27 15:57:49.560825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.244 [2024-09-27 15:57:49.560876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.244 [2024-09-27 15:57:49.560889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.244 [2024-09-27 15:57:49.560907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.244 [2024-09-27 15:57:49.560915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.244 [2024-09-27 15:57:49.560929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.244 qpair failed and we were unable to recover it. 00:39:09.244 [2024-09-27 15:57:49.570909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.244 [2024-09-27 15:57:49.570964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.244 [2024-09-27 15:57:49.570980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.244 [2024-09-27 15:57:49.570987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.244 [2024-09-27 15:57:49.570994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.244 [2024-09-27 15:57:49.571007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.244 qpair failed and we were unable to recover it. 
00:39:09.244 [2024-09-27 15:57:49.580919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.244 [2024-09-27 15:57:49.580978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.244 [2024-09-27 15:57:49.580991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.244 [2024-09-27 15:57:49.580998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.244 [2024-09-27 15:57:49.581004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.244 [2024-09-27 15:57:49.581018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.244 qpair failed and we were unable to recover it. 00:39:09.244 [2024-09-27 15:57:49.590919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.244 [2024-09-27 15:57:49.590965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.244 [2024-09-27 15:57:49.590979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.244 [2024-09-27 15:57:49.590987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.244 [2024-09-27 15:57:49.590994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.244 [2024-09-27 15:57:49.591007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.244 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.600917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.600966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.600979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.600986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.600993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.601006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 
00:39:09.245 [2024-09-27 15:57:49.610919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.610976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.610989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.610996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.611003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.611019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.621054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.621112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.621126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.621133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.621139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.621153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.631021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.631072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.631085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.631092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.631099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.631112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 
00:39:09.245 [2024-09-27 15:57:49.641057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.641107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.641121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.641128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.641134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.641148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.651177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.651232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.651245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.651252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.651259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.651272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.661181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.661251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.661267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.661274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.661281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.661294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 
00:39:09.245 [2024-09-27 15:57:49.671144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.671196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.671212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.671220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.671226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.671240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.681075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.681126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.681140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.681147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.681153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.681167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.691223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.691278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.691292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.691299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.691305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.691318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 
00:39:09.245 [2024-09-27 15:57:49.701233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.701285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.701299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.701306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.701312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.701329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.711238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.711294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.711307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.711314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.711321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.245 [2024-09-27 15:57:49.711334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.245 qpair failed and we were unable to recover it. 00:39:09.245 [2024-09-27 15:57:49.721269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.245 [2024-09-27 15:57:49.721320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.245 [2024-09-27 15:57:49.721333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.245 [2024-09-27 15:57:49.721340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.245 [2024-09-27 15:57:49.721347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.246 [2024-09-27 15:57:49.721361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.246 qpair failed and we were unable to recover it. 
00:39:09.509 [2024-09-27 15:57:49.731232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.731289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.731303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.731310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.731317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.731330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.741249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.741306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.741320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.741328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.741334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.741347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.751320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.751385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.751402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.751409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.751416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.751429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 
00:39:09.509 [2024-09-27 15:57:49.761386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.761437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.761450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.761457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.761464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.761477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.771496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.771565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.771579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.771586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.771592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.771605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.781532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.781584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.781598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.781605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.781612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.781624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 
00:39:09.509 [2024-09-27 15:57:49.791462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.791564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.791577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.791585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.791591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.791608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.801492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.801537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.801550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.801557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.801565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.801579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.811563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.811619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.811633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.811641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.811647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.811661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 
00:39:09.509 [2024-09-27 15:57:49.821571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.821628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.821641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.821649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.821655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.821669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.831564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.831627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.509 [2024-09-27 15:57:49.831640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.509 [2024-09-27 15:57:49.831647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.509 [2024-09-27 15:57:49.831654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.509 [2024-09-27 15:57:49.831668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.509 qpair failed and we were unable to recover it. 00:39:09.509 [2024-09-27 15:57:49.841602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.509 [2024-09-27 15:57:49.841650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.841667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.841674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.841680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.841694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 
00:39:09.510 [2024-09-27 15:57:49.851669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.851723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.851736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.851744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.851750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.851763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.861663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.861717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.861731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.861738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.861745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.861758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.871668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.871714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.871728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.871735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.871742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.871755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 
00:39:09.510 [2024-09-27 15:57:49.881580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.881628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.881643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.881650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.881660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.881674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.891768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.891824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.891838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.891845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.891852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.891866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.901781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.901839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.901853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.901860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.901867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.901881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 
00:39:09.510 [2024-09-27 15:57:49.911770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.911818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.911831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.911839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.911845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.911859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.921792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.921847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.921861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.921868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.921874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.921887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.931878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.931960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.931974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.931982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.931988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.932001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 
00:39:09.510 [2024-09-27 15:57:49.941928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.942020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.942034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.942041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.942048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.942062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.951779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.951828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.951841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.951848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.951855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.951868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 00:39:09.510 [2024-09-27 15:57:49.961913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.962000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.510 [2024-09-27 15:57:49.962014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.510 [2024-09-27 15:57:49.962021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.510 [2024-09-27 15:57:49.962027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.510 [2024-09-27 15:57:49.962041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.510 qpair failed and we were unable to recover it. 
00:39:09.510 [2024-09-27 15:57:49.971849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.510 [2024-09-27 15:57:49.971915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.511 [2024-09-27 15:57:49.971929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.511 [2024-09-27 15:57:49.971936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.511 [2024-09-27 15:57:49.971946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.511 [2024-09-27 15:57:49.971959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.511 qpair failed and we were unable to recover it. 00:39:09.511 [2024-09-27 15:57:49.982049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.511 [2024-09-27 15:57:49.982134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.511 [2024-09-27 15:57:49.982148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.511 [2024-09-27 15:57:49.982156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.511 [2024-09-27 15:57:49.982163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.511 [2024-09-27 15:57:49.982178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.511 qpair failed and we were unable to recover it. 00:39:09.511 [2024-09-27 15:57:49.992001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.511 [2024-09-27 15:57:49.992046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.511 [2024-09-27 15:57:49.992059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.511 [2024-09-27 15:57:49.992066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.511 [2024-09-27 15:57:49.992073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.511 [2024-09-27 15:57:49.992086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.511 qpair failed and we were unable to recover it. 
00:39:09.773 [2024-09-27 15:57:50.002028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.773 [2024-09-27 15:57:50.002078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.773 [2024-09-27 15:57:50.002093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.773 [2024-09-27 15:57:50.002100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.773 [2024-09-27 15:57:50.002107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.773 [2024-09-27 15:57:50.002122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.773 qpair failed and we were unable to recover it. 00:39:09.773 [2024-09-27 15:57:50.012180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.773 [2024-09-27 15:57:50.012236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.773 [2024-09-27 15:57:50.012250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.773 [2024-09-27 15:57:50.012257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.773 [2024-09-27 15:57:50.012264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.773 [2024-09-27 15:57:50.012278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.773 qpair failed and we were unable to recover it. 00:39:09.773 [2024-09-27 15:57:50.022250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.773 [2024-09-27 15:57:50.022310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.773 [2024-09-27 15:57:50.022323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.773 [2024-09-27 15:57:50.022330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.773 [2024-09-27 15:57:50.022337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.022350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 
00:39:09.774 [2024-09-27 15:57:50.032238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.032286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.032299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.032306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.032313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.032326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.042111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.042160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.042173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.042181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.042187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.042200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.052310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.052365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.052379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.052386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.052393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.052406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 
00:39:09.774 [2024-09-27 15:57:50.062334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.062385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.062399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.062407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.062417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.062430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.072307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.072405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.072421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.072428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.072434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.072448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.082209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.082257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.082271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.082278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.082285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.082298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 
00:39:09.774 [2024-09-27 15:57:50.092410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.092467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.092481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.092488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.092495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.092508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.102419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.102489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.102502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.102509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.102517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.102530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.112413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.112507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.112522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.112529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.112535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.112549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 
00:39:09.774 [2024-09-27 15:57:50.122328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.122389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.122403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.122410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.122416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.122429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.132423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.132476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.132489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.132496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.132503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.132516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.774 [2024-09-27 15:57:50.142572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.142648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.142661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.142668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.142675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.142689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 
00:39:09.774 [2024-09-27 15:57:50.152529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.774 [2024-09-27 15:57:50.152581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.774 [2024-09-27 15:57:50.152597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.774 [2024-09-27 15:57:50.152605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.774 [2024-09-27 15:57:50.152617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.774 [2024-09-27 15:57:50.152632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.774 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.162534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.162581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.162597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.162604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.162610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.162624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.172509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.172564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.172578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.172585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.172592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.172605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 
00:39:09.775 [2024-09-27 15:57:50.182665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.182727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.182740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.182747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.182754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.182767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.192728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.192791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.192816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.192825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.192832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.192851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.202670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.202723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.202738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.202746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.202753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.202767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 
00:39:09.775 [2024-09-27 15:57:50.212824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.212904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.212918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.212926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.212933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.212948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.222826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.222878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.222891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.222903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.222909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.222923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.232736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.232807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.232820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.232827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.232833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.232847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 
00:39:09.775 [2024-09-27 15:57:50.242652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.242698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.242711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.242722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.242729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.242742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:09.775 [2024-09-27 15:57:50.252867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:09.775 [2024-09-27 15:57:50.252942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:09.775 [2024-09-27 15:57:50.252956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:09.775 [2024-09-27 15:57:50.252963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:09.775 [2024-09-27 15:57:50.252969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:09.775 [2024-09-27 15:57:50.252984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:09.775 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.262874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.262928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.262942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.262949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.262956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.262969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 
00:39:10.038 [2024-09-27 15:57:50.272874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.272931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.272945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.272953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.272959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.272973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.282870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.282917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.282931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.282938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.282945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.282958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.292953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.293007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.293021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.293028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.293035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.293048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 
00:39:10.038 [2024-09-27 15:57:50.302968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.303025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.303039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.303046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.303052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.303065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.312959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.313006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.313019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.313027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.313033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.313047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.322964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.323026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.323039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.323047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.323053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.323066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 
00:39:10.038 [2024-09-27 15:57:50.332956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.333051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.038 [2024-09-27 15:57:50.333064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.038 [2024-09-27 15:57:50.333075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.038 [2024-09-27 15:57:50.333082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.038 [2024-09-27 15:57:50.333096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.038 qpair failed and we were unable to recover it. 00:39:10.038 [2024-09-27 15:57:50.343093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.038 [2024-09-27 15:57:50.343149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.343162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.343169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.343176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.343188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.353108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.353179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.353192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.353200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.353206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.353219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 
00:39:10.039 [2024-09-27 15:57:50.363071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.363123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.363139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.363146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.363152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.363166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.373170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.373227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.373240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.373247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.373253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.373267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.383191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.383245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.383259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.383266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.383272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.383285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 
00:39:10.039 [2024-09-27 15:57:50.393159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.393207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.393220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.393228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.393234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.393248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.403185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.403240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.403253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.403261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.403267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.403280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.413331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.413387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.413400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.413407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.413414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.413427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 
00:39:10.039 [2024-09-27 15:57:50.423305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.423362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.423375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.423385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.423393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.423406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.433264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.433318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.433331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.433339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.433346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.433359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.443365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.443434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.443447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.443454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.443461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.443474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 
00:39:10.039 [2024-09-27 15:57:50.453260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.453334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.453346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.453354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.453360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.453373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.463421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.463474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.463488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.039 [2024-09-27 15:57:50.463495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.039 [2024-09-27 15:57:50.463501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.039 [2024-09-27 15:57:50.463514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.039 qpair failed and we were unable to recover it. 00:39:10.039 [2024-09-27 15:57:50.473404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.039 [2024-09-27 15:57:50.473455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.039 [2024-09-27 15:57:50.473468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.473475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.473482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.473495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 
00:39:10.040 [2024-09-27 15:57:50.483433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.040 [2024-09-27 15:57:50.483488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.040 [2024-09-27 15:57:50.483501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.483508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.483514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.483527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 00:39:10.040 [2024-09-27 15:57:50.493511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.040 [2024-09-27 15:57:50.493569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.040 [2024-09-27 15:57:50.493582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.493590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.493596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.493609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 00:39:10.040 [2024-09-27 15:57:50.503521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.040 [2024-09-27 15:57:50.503576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.040 [2024-09-27 15:57:50.503590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.503597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.503604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.503617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 
00:39:10.040 [2024-09-27 15:57:50.513493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.040 [2024-09-27 15:57:50.513582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.040 [2024-09-27 15:57:50.513595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.513605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.513612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.513625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 00:39:10.040 [2024-09-27 15:57:50.523530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.040 [2024-09-27 15:57:50.523575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.040 [2024-09-27 15:57:50.523589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.040 [2024-09-27 15:57:50.523596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.040 [2024-09-27 15:57:50.523603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.040 [2024-09-27 15:57:50.523615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.040 qpair failed and we were unable to recover it. 00:39:10.301 [2024-09-27 15:57:50.533496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.301 [2024-09-27 15:57:50.533589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.301 [2024-09-27 15:57:50.533602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.301 [2024-09-27 15:57:50.533609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.301 [2024-09-27 15:57:50.533616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.301 [2024-09-27 15:57:50.533629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.301 qpair failed and we were unable to recover it. 
00:39:10.301 [2024-09-27 15:57:50.543639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.301 [2024-09-27 15:57:50.543689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.301 [2024-09-27 15:57:50.543702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.301 [2024-09-27 15:57:50.543710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.301 [2024-09-27 15:57:50.543716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.301 [2024-09-27 15:57:50.543729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.301 qpair failed and we were unable to recover it. 00:39:10.301 [2024-09-27 15:57:50.553631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.301 [2024-09-27 15:57:50.553683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.301 [2024-09-27 15:57:50.553696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.301 [2024-09-27 15:57:50.553704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.301 [2024-09-27 15:57:50.553710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.301 [2024-09-27 15:57:50.553724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.301 qpair failed and we were unable to recover it. 00:39:10.302 [2024-09-27 15:57:50.563639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.302 [2024-09-27 15:57:50.563685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.302 [2024-09-27 15:57:50.563700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.302 [2024-09-27 15:57:50.563708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.302 [2024-09-27 15:57:50.563714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.302 [2024-09-27 15:57:50.563727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.302 qpair failed and we were unable to recover it. 
00:39:10.302 [2024-09-27 15:57:50.573709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.302 [2024-09-27 15:57:50.573762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.302 [2024-09-27 15:57:50.573775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.302 [2024-09-27 15:57:50.573782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.302 [2024-09-27 15:57:50.573789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.302 [2024-09-27 15:57:50.573802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.302 qpair failed and we were unable to recover it. 00:39:10.302 [2024-09-27 15:57:50.583707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.302 [2024-09-27 15:57:50.583758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.302 [2024-09-27 15:57:50.583772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.302 [2024-09-27 15:57:50.583779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.302 [2024-09-27 15:57:50.583785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.302 [2024-09-27 15:57:50.583798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.302 qpair failed and we were unable to recover it. 00:39:10.302 [2024-09-27 15:57:50.593728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.302 [2024-09-27 15:57:50.593774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.302 [2024-09-27 15:57:50.593788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.302 [2024-09-27 15:57:50.593795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.302 [2024-09-27 15:57:50.593801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.302 [2024-09-27 15:57:50.593814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.302 qpair failed and we were unable to recover it. 
00:39:10.832 [2024-09-27 15:57:51.235471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.235523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.235537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.235544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.235551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.235564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.245394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.245450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.245463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.245471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.245477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.245490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.255440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.255495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.255508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.255515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.255522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.255535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 
00:39:10.832 [2024-09-27 15:57:51.265560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.265608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.265621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.265628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.265635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.265648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.275587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.275639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.275652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.275659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.275669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.275682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.285622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.285667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.285681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.285688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.285695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.285708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 
00:39:10.832 [2024-09-27 15:57:51.295677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.295732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.295745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.295752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.295759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.295772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.305683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.305740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.305753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.305761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.305768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.305780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 00:39:10.832 [2024-09-27 15:57:51.315700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:10.832 [2024-09-27 15:57:51.315748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:10.832 [2024-09-27 15:57:51.315760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:10.832 [2024-09-27 15:57:51.315767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:10.832 [2024-09-27 15:57:51.315774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:10.832 [2024-09-27 15:57:51.315787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:10.832 qpair failed and we were unable to recover it. 
00:39:11.095 [2024-09-27 15:57:51.325755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.325843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.325856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.325864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.325871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.325884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.335813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.335882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.335899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.335907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.335913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.335927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.345794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.345847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.345860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.345867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.345874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.345887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 
00:39:11.096 [2024-09-27 15:57:51.355807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.355858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.355870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.355878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.355884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.355902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.365825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.365872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.365887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.365898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.365908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.365922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.375905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.375995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.376009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.376017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.376024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.376037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 
00:39:11.096 [2024-09-27 15:57:51.385890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.385960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.385973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.385980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.385987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.386000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.395899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.395946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.395961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.395968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.395974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.395989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.405991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.406037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.406050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.406057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.406064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.406078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 
00:39:11.096 [2024-09-27 15:57:51.416002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.416060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.416073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.416080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.416087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.416100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.426054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.096 [2024-09-27 15:57:51.426125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.096 [2024-09-27 15:57:51.426138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.096 [2024-09-27 15:57:51.426145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.096 [2024-09-27 15:57:51.426152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.096 [2024-09-27 15:57:51.426165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.096 qpair failed and we were unable to recover it. 00:39:11.096 [2024-09-27 15:57:51.435887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.435939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.435952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.435959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.435966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.435979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 
00:39:11.097 [2024-09-27 15:57:51.446038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.446127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.446141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.446149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.446156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.446169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.456091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.456161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.456175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.456186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.456192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.456206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.466015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.466075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.466088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.466096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.466103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.466116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 
00:39:11.097 [2024-09-27 15:57:51.476131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.476179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.476192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.476200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.476206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.476219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.486145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.486202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.486216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.486223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.486229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.486242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.496227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.496286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.496299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.496306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.496313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.496326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 
00:39:11.097 [2024-09-27 15:57:51.506297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.506370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.506384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.506391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.506398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.506412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.516231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.516281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.516294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.516301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.516308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.516321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.526258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.526304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.526317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.526324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.526331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.526344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 
00:39:11.097 [2024-09-27 15:57:51.536224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.097 [2024-09-27 15:57:51.536277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.097 [2024-09-27 15:57:51.536290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.097 [2024-09-27 15:57:51.536297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.097 [2024-09-27 15:57:51.536304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.097 [2024-09-27 15:57:51.536317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.097 qpair failed and we were unable to recover it. 00:39:11.097 [2024-09-27 15:57:51.546331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.098 [2024-09-27 15:57:51.546383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.098 [2024-09-27 15:57:51.546396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.098 [2024-09-27 15:57:51.546407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.098 [2024-09-27 15:57:51.546413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.098 [2024-09-27 15:57:51.546426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.098 qpair failed and we were unable to recover it. 00:39:11.098 [2024-09-27 15:57:51.556319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.098 [2024-09-27 15:57:51.556369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.098 [2024-09-27 15:57:51.556383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.098 [2024-09-27 15:57:51.556390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.098 [2024-09-27 15:57:51.556396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.098 [2024-09-27 15:57:51.556410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.098 qpair failed and we were unable to recover it. 
00:39:11.098 [2024-09-27 15:57:51.566364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.098 [2024-09-27 15:57:51.566412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.098 [2024-09-27 15:57:51.566426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.098 [2024-09-27 15:57:51.566434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.098 [2024-09-27 15:57:51.566441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.098 [2024-09-27 15:57:51.566454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.098 qpair failed and we were unable to recover it. 00:39:11.098 [2024-09-27 15:57:51.576311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.098 [2024-09-27 15:57:51.576414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.098 [2024-09-27 15:57:51.576427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.098 [2024-09-27 15:57:51.576435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.098 [2024-09-27 15:57:51.576441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.098 [2024-09-27 15:57:51.576455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.098 qpair failed and we were unable to recover it. 00:39:11.360 [2024-09-27 15:57:51.586339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.360 [2024-09-27 15:57:51.586403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.360 [2024-09-27 15:57:51.586417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.360 [2024-09-27 15:57:51.586424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.360 [2024-09-27 15:57:51.586430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.360 [2024-09-27 15:57:51.586444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.360 qpair failed and we were unable to recover it. 
00:39:11.361 [2024-09-27 15:57:51.596428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.596472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.596486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.596493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.596500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.596513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.606470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.606514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.606529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.606536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.606542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.606556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.616540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.616632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.616646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.616654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.616661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.616674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 
00:39:11.361 [2024-09-27 15:57:51.626507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.626558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.626571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.626578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.626585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.626598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.636601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.636648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.636661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.636672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.636678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.636692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.646570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.646620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.646633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.646640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.646647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.646660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 
00:39:11.361 [2024-09-27 15:57:51.656651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.656758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.656772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.656779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.656786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.656799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.666653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.666698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.666713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.666720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.666727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.666741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.676542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.676595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.676609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.676616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.676622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.676636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 
00:39:11.361 [2024-09-27 15:57:51.686688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.686734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.686747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.686754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.686761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.686774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.696762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.696818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.696831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.696838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.696845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.696858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 00:39:11.361 [2024-09-27 15:57:51.706731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:11.361 [2024-09-27 15:57:51.706779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:11.361 [2024-09-27 15:57:51.706793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:11.361 [2024-09-27 15:57:51.706800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:11.361 [2024-09-27 15:57:51.706806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:11.361 [2024-09-27 15:57:51.706819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:11.361 qpair failed and we were unable to recover it. 
00:39:11.361 [2024-09-27 15:57:51.716766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.361 [2024-09-27 15:57:51.716863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.361 [2024-09-27 15:57:51.716877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.361 [2024-09-27 15:57:51.716884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.361 [2024-09-27 15:57:51.716892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.361 [2024-09-27 15:57:51.716910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.361 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.726796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.726846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.726859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.726869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.726876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.726889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.736750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.736806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.736820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.736827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.736834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.736847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.746870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.746922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.746936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.746943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.746949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.746963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.756878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.756930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.756943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.756950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.756957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.756970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.766921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.766971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.766984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.766991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.766998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.767011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.776985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.777042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.777055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.777062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.777068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.777082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.787025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.787081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.787095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.787104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.787111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.787125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.796991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.797038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.797051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.797058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.797065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.797078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.806948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.806994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.807007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.807014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.807021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.807034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.817078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.817133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.817150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.817157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.817164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.817177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.826957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.827010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.827024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.827031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.827038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.827052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.837107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.837164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.837178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.837185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.837191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.837204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.362 [2024-09-27 15:57:51.847122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.362 [2024-09-27 15:57:51.847175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.362 [2024-09-27 15:57:51.847188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.362 [2024-09-27 15:57:51.847195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.362 [2024-09-27 15:57:51.847202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.362 [2024-09-27 15:57:51.847215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.362 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.857181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.857233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.857246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.857253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.857260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.857273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.867182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.867230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.867244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.867252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.867258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.867271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.877173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.877222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.877236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.877243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.877250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.877263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.887224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.887271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.887285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.887292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.887299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.887312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.897324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.897377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.897390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.897397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.897404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.897417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.907292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.907344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.907361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.907368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.907375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.907388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.917226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.917281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.917291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.917296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.917300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.917310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.927317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.927359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.927369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.927374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.927379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.927388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.937396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.937453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.937463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.937468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.937473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.937483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.947268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.947316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.947326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.947331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.947336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.625 [2024-09-27 15:57:51.947348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.625 qpair failed and we were unable to recover it.
00:39:11.625 [2024-09-27 15:57:51.957385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.625 [2024-09-27 15:57:51.957429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.625 [2024-09-27 15:57:51.957439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.625 [2024-09-27 15:57:51.957444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.625 [2024-09-27 15:57:51.957449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:51.957458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:51.967447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:51.967487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:51.967497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:51.967502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:51.967507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:51.967516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:51.977507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:51.977589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:51.977599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:51.977604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:51.977609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:51.977619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:51.987488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:51.987532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:51.987542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:51.987547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:51.987552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:51.987561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:51.997525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:51.997577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:51.997591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:51.997596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:51.997600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:51.997610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.007558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.007623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.007633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.007638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.007643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.007652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.017624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.017692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.017702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.017707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.017712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.017721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.027655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.027731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.027742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.027747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.027752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.027761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.037635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.037679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.037689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.037694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.037699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.037711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.047542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.047584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.047595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.047600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.047605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.047614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.057728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.057776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.057786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.057791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.057796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.057805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.067726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.067810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.067821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.067826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.067831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.067841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.077613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.077659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.077671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.077676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.077681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.626 [2024-09-27 15:57:52.077691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.626 qpair failed and we were unable to recover it.
00:39:11.626 [2024-09-27 15:57:52.087768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.626 [2024-09-27 15:57:52.087813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.626 [2024-09-27 15:57:52.087827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.626 [2024-09-27 15:57:52.087832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.626 [2024-09-27 15:57:52.087837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.627 [2024-09-27 15:57:52.087846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.627 qpair failed and we were unable to recover it.
00:39:11.627 [2024-09-27 15:57:52.097884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.627 [2024-09-27 15:57:52.097939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.627 [2024-09-27 15:57:52.097956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.627 [2024-09-27 15:57:52.097961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.627 [2024-09-27 15:57:52.097966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.627 [2024-09-27 15:57:52.097976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.627 qpair failed and we were unable to recover it.
00:39:11.627 [2024-09-27 15:57:52.107841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.627 [2024-09-27 15:57:52.107885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.627 [2024-09-27 15:57:52.107899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.627 [2024-09-27 15:57:52.107904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.627 [2024-09-27 15:57:52.107909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.627 [2024-09-27 15:57:52.107919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.627 qpair failed and we were unable to recover it.
00:39:11.889 [2024-09-27 15:57:52.117858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.889 [2024-09-27 15:57:52.117947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.889 [2024-09-27 15:57:52.117959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.889 [2024-09-27 15:57:52.117964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.889 [2024-09-27 15:57:52.117969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.889 [2024-09-27 15:57:52.117979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.889 qpair failed and we were unable to recover it.
00:39:11.889 [2024-09-27 15:57:52.127857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.889 [2024-09-27 15:57:52.127918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.889 [2024-09-27 15:57:52.127929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.889 [2024-09-27 15:57:52.127934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.889 [2024-09-27 15:57:52.127938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.889 [2024-09-27 15:57:52.127952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.889 qpair failed and we were unable to recover it.
00:39:11.889 [2024-09-27 15:57:52.137961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.889 [2024-09-27 15:57:52.138011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.889 [2024-09-27 15:57:52.138021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.889 [2024-09-27 15:57:52.138026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.889 [2024-09-27 15:57:52.138031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.889 [2024-09-27 15:57:52.138041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.889 qpair failed and we were unable to recover it.
00:39:11.889 [2024-09-27 15:57:52.147968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.889 [2024-09-27 15:57:52.148031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.889 [2024-09-27 15:57:52.148040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.889 [2024-09-27 15:57:52.148046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.889 [2024-09-27 15:57:52.148050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.889 [2024-09-27 15:57:52.148060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.889 qpair failed and we were unable to recover it.
00:39:11.889 [2024-09-27 15:57:52.157965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.889 [2024-09-27 15:57:52.158008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.889 [2024-09-27 15:57:52.158018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.889 [2024-09-27 15:57:52.158023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.158027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.158037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.167989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.168059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.168069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.168074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.168078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.168088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.178041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.178096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.178108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.178113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.178118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.178128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.188110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.188177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.188187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.188192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.188196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.188206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.198126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.198192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.198202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.198207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.198212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.198221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.208145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.208217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.208227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.208232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.208236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.208246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.218147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.218195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.218205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.218210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.218214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.218228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.228169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.228215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.228226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.228231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.228236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.228246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.238170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.238219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.238229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.238234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.238239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.238249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.248219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.248259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.248268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.248273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.248278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.248288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.258279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.258328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.258338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.258343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.258347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.258357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.268272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.268322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.268334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.268339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.268344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.268354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.278144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.278186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.278195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.278200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.890 [2024-09-27 15:57:52.278205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.890 [2024-09-27 15:57:52.278215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.890 qpair failed and we were unable to recover it.
00:39:11.890 [2024-09-27 15:57:52.288258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.890 [2024-09-27 15:57:52.288298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.890 [2024-09-27 15:57:52.288307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.890 [2024-09-27 15:57:52.288312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.288317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.288326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.298370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.298420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.298430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.298435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.298439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.298449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.308346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.308394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.308404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.308409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.308417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.308426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.318256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.318297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.318307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.318312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.318317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.318327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.328410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.328456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.328466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.328470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.328475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.328485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.338496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.338546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.338556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.338561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.338566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.338575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.348490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.348560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.348571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.348576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.348581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.348590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.358500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.358549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.358558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.358564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.358568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.358578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:11.891 [2024-09-27 15:57:52.368516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:11.891 [2024-09-27 15:57:52.368602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:11.891 [2024-09-27 15:57:52.368612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:11.891 [2024-09-27 15:57:52.368618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:11.891 [2024-09-27 15:57:52.368624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:11.891 [2024-09-27 15:57:52.368633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:11.891 qpair failed and we were unable to recover it.
00:39:12.154 [2024-09-27 15:57:52.378557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.154 [2024-09-27 15:57:52.378608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.154 [2024-09-27 15:57:52.378619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.154 [2024-09-27 15:57:52.378624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.154 [2024-09-27 15:57:52.378629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.154 [2024-09-27 15:57:52.378638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.154 qpair failed and we were unable to recover it.
00:39:12.154 [2024-09-27 15:57:52.388601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.154 [2024-09-27 15:57:52.388650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.154 [2024-09-27 15:57:52.388660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.154 [2024-09-27 15:57:52.388665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.154 [2024-09-27 15:57:52.388670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.154 [2024-09-27 15:57:52.388680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.154 qpair failed and we were unable to recover it.
00:39:12.154 [2024-09-27 15:57:52.398617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.154 [2024-09-27 15:57:52.398662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.154 [2024-09-27 15:57:52.398672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.154 [2024-09-27 15:57:52.398677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.154 [2024-09-27 15:57:52.398688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.154 [2024-09-27 15:57:52.398698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.154 qpair failed and we were unable to recover it.
00:39:12.154 [2024-09-27 15:57:52.408686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.408755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.408765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.408769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.408774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.408783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.418560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.418643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.418653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.418658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.418663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.418673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.428684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.428733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.428742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.428747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.428752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.428761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 
00:39:12.154 [2024-09-27 15:57:52.438698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.438755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.438765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.438771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.438775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.438785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.448611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.448661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.448671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.448677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.448681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.448691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.458818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.458866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.458877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.458882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.458887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.458899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 
00:39:12.154 [2024-09-27 15:57:52.468812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.468860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.468870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.468875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.468880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.468889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.478811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.154 [2024-09-27 15:57:52.478853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.154 [2024-09-27 15:57:52.478863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.154 [2024-09-27 15:57:52.478868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.154 [2024-09-27 15:57:52.478873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.154 [2024-09-27 15:57:52.478882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.154 qpair failed and we were unable to recover it. 00:39:12.154 [2024-09-27 15:57:52.488837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.488890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.488903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.488908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.488915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.488925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 
00:39:12.155 [2024-09-27 15:57:52.498935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.499018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.499027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.499033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.499037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.499047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.508920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.508961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.508971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.508976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.508981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.508991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.518937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.519026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.519036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.519042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.519047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.519056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 
00:39:12.155 [2024-09-27 15:57:52.528818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.528859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.528869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.528874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.528879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.528888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.539005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.539056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.539066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.539071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.539076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.539085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.548896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.548940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.548950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.548955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.548960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.548970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 
00:39:12.155 [2024-09-27 15:57:52.559041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.559085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.559095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.559101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.559105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.559115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.569034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.569078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.569088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.569094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.569098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.569108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.579136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.579202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.579212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.579218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.579225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.579235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 
00:39:12.155 [2024-09-27 15:57:52.589102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.589145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.589155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.589160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.589165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.589175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.599182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.599268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.599278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.599283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.599289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.599299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 00:39:12.155 [2024-09-27 15:57:52.609187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.155 [2024-09-27 15:57:52.609234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.155 [2024-09-27 15:57:52.609243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.155 [2024-09-27 15:57:52.609249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.155 [2024-09-27 15:57:52.609253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.155 [2024-09-27 15:57:52.609263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.155 qpair failed and we were unable to recover it. 
00:39:12.155 [2024-09-27 15:57:52.619239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.156 [2024-09-27 15:57:52.619287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.156 [2024-09-27 15:57:52.619298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.156 [2024-09-27 15:57:52.619303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.156 [2024-09-27 15:57:52.619307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.156 [2024-09-27 15:57:52.619317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.156 qpair failed and we were unable to recover it. 00:39:12.156 [2024-09-27 15:57:52.629250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.156 [2024-09-27 15:57:52.629339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.156 [2024-09-27 15:57:52.629350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.156 [2024-09-27 15:57:52.629356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.156 [2024-09-27 15:57:52.629361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.156 [2024-09-27 15:57:52.629371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.156 qpair failed and we were unable to recover it. 00:39:12.156 [2024-09-27 15:57:52.639273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.156 [2024-09-27 15:57:52.639355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.156 [2024-09-27 15:57:52.639366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.156 [2024-09-27 15:57:52.639370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.156 [2024-09-27 15:57:52.639375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.156 [2024-09-27 15:57:52.639386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.156 qpair failed and we were unable to recover it. 
00:39:12.418 [2024-09-27 15:57:52.649294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.418 [2024-09-27 15:57:52.649336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.418 [2024-09-27 15:57:52.649347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.418 [2024-09-27 15:57:52.649352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.418 [2024-09-27 15:57:52.649356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.418 [2024-09-27 15:57:52.649366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.418 qpair failed and we were unable to recover it. 00:39:12.418 [2024-09-27 15:57:52.659375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.659427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.659436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.659442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.659446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.659456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.669332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.669377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.669388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.669396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.669401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.669411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 
00:39:12.419 [2024-09-27 15:57:52.679246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.679288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.679299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.679304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.679309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.679319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.689409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.689454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.689463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.689469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.689474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.689483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.699435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.699484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.699494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.699499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.699504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.699513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 
00:39:12.419 [2024-09-27 15:57:52.709488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.709533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.709543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.709548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.709553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.709562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.719486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.719528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.719538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.719543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.719548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.719558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.729503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.729544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.729554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.729559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.729564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.729573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 
00:39:12.419 [2024-09-27 15:57:52.739597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.739644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.739654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.739660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.739664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.739674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.749586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.749632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.749652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.749658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.749663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.749677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.759600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.759646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.759665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.759675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.759680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.759694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 
00:39:12.419 [2024-09-27 15:57:52.769596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.769642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.769661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.769668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.769673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.769686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.419 qpair failed and we were unable to recover it. 00:39:12.419 [2024-09-27 15:57:52.779703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.419 [2024-09-27 15:57:52.779792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.419 [2024-09-27 15:57:52.779812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.419 [2024-09-27 15:57:52.779818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.419 [2024-09-27 15:57:52.779824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.419 [2024-09-27 15:57:52.779837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.789688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.789778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.789790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.789795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.789801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.789812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 
00:39:12.420 [2024-09-27 15:57:52.799717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.799766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.799776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.799781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.799786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.799796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.809688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.809734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.809744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.809749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.809753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.809763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.819847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.819898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.819909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.819914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.819918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.819928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 
00:39:12.420 [2024-09-27 15:57:52.829838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.829881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.829891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.829899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.829904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.829914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.839788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.839862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.839872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.839877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.839882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.839891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.849797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.849888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.849918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.849927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.849932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.849943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 
00:39:12.420 [2024-09-27 15:57:52.859897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.859945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.859956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.859960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.859965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.859975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.870100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.870147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.870157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.870162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.870167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.870176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.879768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.879811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.879821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.879826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.879830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.879839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 
00:39:12.420 [2024-09-27 15:57:52.889946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.889991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.890003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.890009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.890015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.890027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.420 [2024-09-27 15:57:52.899998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.420 [2024-09-27 15:57:52.900048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.420 [2024-09-27 15:57:52.900059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.420 [2024-09-27 15:57:52.900064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.420 [2024-09-27 15:57:52.900068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.420 [2024-09-27 15:57:52.900078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.420 qpair failed and we were unable to recover it. 00:39:12.683 [2024-09-27 15:57:52.909904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:12.683 [2024-09-27 15:57:52.909963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:12.683 [2024-09-27 15:57:52.909974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:12.683 [2024-09-27 15:57:52.909980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:12.683 [2024-09-27 15:57:52.909985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0 00:39:12.683 [2024-09-27 15:57:52.909995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:12.683 qpair failed and we were unable to recover it. 
00:39:12.683 [2024-09-27 15:57:52.920012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.920099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.920109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.920114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.920118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.920128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.930047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.930091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.930101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.930106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.930111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.930121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.940135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.940224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.940234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.940242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.940247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.940256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.950130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.950176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.950185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.950191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.950195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.950205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.959982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.960023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.960033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.960038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.960043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.960053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.970151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.970193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.970202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.970208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.970212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.970222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.980193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.980242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.980251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.980257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.980261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.980270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:52.990182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:52.990230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:52.990240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:52.990245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:52.990250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:52.990259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:53.000214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:53.000261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:53.000271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:53.000276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:53.000281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:53.000290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:53.010255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:53.010297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:53.010307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:53.010313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:53.010318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:53.010327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:53.020301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:53.020354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:53.020365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:53.020370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:53.020375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:53.020385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:53.030346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:53.030396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:53.030409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:53.030414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:53.030419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:53.030428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.684 qpair failed and we were unable to recover it.
00:39:12.684 [2024-09-27 15:57:53.040316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.684 [2024-09-27 15:57:53.040358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.684 [2024-09-27 15:57:53.040369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.684 [2024-09-27 15:57:53.040374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.684 [2024-09-27 15:57:53.040378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.684 [2024-09-27 15:57:53.040388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.050234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.050277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.050286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.050291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.050296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.050305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.060440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.060488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.060499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.060504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.060509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.060519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.070417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.070467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.070478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.070483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.070487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.070497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.080446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.080487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.080497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.080501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.080506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.080515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.090432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.090475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.090487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.090492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.090497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.090509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.100440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.100489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.100500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.100505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.100510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.100520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.110550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.110595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.110605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.110610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.110615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.110625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.120574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.120664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.120677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.120682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.120686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.120696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.130567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.130608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.130617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.130622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.130627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.130636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.140673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.140764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.140774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.140779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.140784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.140794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.150521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.150562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.150572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.150576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.150581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.150590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.685 [2024-09-27 15:57:53.160662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.685 [2024-09-27 15:57:53.160708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.685 [2024-09-27 15:57:53.160718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.685 [2024-09-27 15:57:53.160723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.685 [2024-09-27 15:57:53.160728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.685 [2024-09-27 15:57:53.160743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.685 qpair failed and we were unable to recover it.
00:39:12.947 [2024-09-27 15:57:53.170684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.170727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.170738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.170743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.947 [2024-09-27 15:57:53.170747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.947 [2024-09-27 15:57:53.170757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.947 qpair failed and we were unable to recover it.
00:39:12.947 [2024-09-27 15:57:53.180779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.180851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.180860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.180865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.947 [2024-09-27 15:57:53.180870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.947 [2024-09-27 15:57:53.180880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.947 qpair failed and we were unable to recover it.
00:39:12.947 [2024-09-27 15:57:53.190719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.190768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.190778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.190783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.947 [2024-09-27 15:57:53.190787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.947 [2024-09-27 15:57:53.190797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.947 qpair failed and we were unable to recover it.
00:39:12.947 [2024-09-27 15:57:53.200744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.200783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.200793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.200798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.947 [2024-09-27 15:57:53.200803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.947 [2024-09-27 15:57:53.200812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.947 qpair failed and we were unable to recover it.
00:39:12.947 [2024-09-27 15:57:53.210779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.210820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.210833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.210838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.947 [2024-09-27 15:57:53.210842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.947 [2024-09-27 15:57:53.210852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.947 qpair failed and we were unable to recover it.
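Every record in the run above has the same shape: the disconnect test has just bounced the target, so the target no longer recognizes controller ID 0x1, each fabric CONNECT completes with status sct 1, sc 130, and the host surfaces it as CQ transport error -6 (ENXIO) on qpair id 3. At the shell level, the comparable defensive pattern is a bounded retry loop around nvme connect. The sketch below is illustrative only and not part of the test suite; the helper name is made up, and the address, port, and subsystem NQN are simply the values in the records:

    # Hypothetical retry helper mirroring the ~10 ms retry cadence visible in
    # the timestamps above; all values are copied from the log records.
    connect_with_retry() {
        local traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1
        local attempt=0 max_attempts=100
        while (( attempt++ < max_attempts )); do
            if nvme connect -t tcp -a "$traddr" -s "$trsvcid" -n "$subnqn" 2>/dev/null; then
                echo "connected after $attempt attempt(s)"
                return 0
            fi
            sleep 0.01   # brief backoff between attempts
        done
        echo "giving up after $max_attempts attempts" >&2
        return 1
    }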
00:39:12.947 [2024-09-27 15:57:53.220738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:12.947 [2024-09-27 15:57:53.220789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:12.947 [2024-09-27 15:57:53.220800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:12.947 [2024-09-27 15:57:53.220806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:12.948 [2024-09-27 15:57:53.220810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a6eca0
00:39:12.948 [2024-09-27 15:57:53.220820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:12.948 qpair failed and we were unable to recover it.
00:39:12.948 [2024-09-27 15:57:53.220972] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:39:12.948 A controller has encountered a failure and is being reset.
00:39:12.948 Controller properly reset.
00:39:12.948 Initializing NVMe Controllers
00:39:12.948 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:12.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:12.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:39:12.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:39:12.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:39:12.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:39:12.948 Initialization complete. Launching workers.
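The block above is the turning point of the test: a Keep Alive submission fails in nvme_ctrlr.c, SPDK declares the controller failed, resets it, and reattaches the controller at 10.0.0.2:4420 on lcores 0 through 3. A shell-side way to confirm that a target is reachable again after such a reset is to poll the discovery service. This is a minimal sketch assuming nvme-cli is available; the helper name and the 30-second budget are invented:

    # Hypothetical readiness probe: poll NVMe-oF discovery until the subsystem
    # named in the log reappears, or give up after a fixed budget.
    wait_for_subsystem() {
        local traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1
        local deadline=$(( SECONDS + 30 ))
        while (( SECONDS < deadline )); do
            if nvme discover -t tcp -a "$traddr" -s "$trsvcid" 2>/dev/null | grep -q "$subnqn"; then
                echo "subsystem $subnqn is reachable again"
                return 0
            fi
            sleep 0.5
        done
        return 1
    }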
00:39:12.948 Starting thread on core 1
00:39:12.948 Starting thread on core 2
00:39:12.948 Starting thread on core 3
00:39:12.948 Starting thread on core 0
15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:39:12.948
00:39:12.948 real 0m11.447s
00:39:12.948 user 0m21.713s
00:39:12.948 sys 0m3.919s
15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:12.948 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:12.948 ************************************
00:39:12.948 END TEST nvmf_target_disconnect_tc2
00:39:12.948 ************************************
00:39:12.948 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:39:12.948 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:39:13.208 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:13.209 rmmod nvme_tcp
00:39:13.209 rmmod nvme_fabrics
00:39:13.209 rmmod nvme_keyring
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 646406 ']'
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 646406
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 646406 ']'
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 646406
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646406
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646406'
00:39:13.209 killing process with pid 646406
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 646406
00:39:13.209 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 646406
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:13.470 15:57:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:15.384 15:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:15.384
00:39:15.385 real 0m22.040s
00:39:15.385 user 0m49.820s
00:39:15.385 sys 0m10.204s
00:39:15.385 15:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:15.385 15:57:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:39:15.385 ************************************
00:39:15.385 END TEST nvmf_target_disconnect
00:39:15.385 ************************************
00:39:15.385 15:57:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:39:15.385
00:39:15.385 real 8m0.934s
00:39:15.385 user 17m31.463s
00:39:15.385 sys 2m26.835s
00:39:15.385 15:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:15.385 15:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:39:15.385 ************************************
00:39:15.385 END TEST nvmf_host
00:39:15.385 ************************************
00:39:15.647 15:57:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:39:15.647 15:57:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:39:15.647 15:57:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:39:15.647 15:57:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:39:15.647 15:57:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:15.647 15:57:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:39:15.647 ************************************
00:39:15.647 START TEST nvmf_target_core_interrupt_mode
00:39:15.647 ************************************
00:39:15.647 15:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
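The teardown traced above is driven by two helpers from the suite: nvmftestfini (module unload, iptables restore via iptr, namespace removal) and killprocess, whose xtrace shows the full sequence for pid 646406: an existence check, a kill -0 liveness probe, a ps comm= lookup that refuses to kill anything named sudo, then kill and wait. A condensed sketch of that kill-and-reap pattern follows; it paraphrases the trace rather than quoting autotest_common.sh verbatim:

    # Sketch of the killprocess pattern walked through in the trace above.
    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1                        # nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_4 in this log
        [[ $process_name == sudo ]] && return 1          # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap it so the exit status is collected
    }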
00:39:15.647 * Looking for test storage...
00:39:15.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:39:15.647 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:39:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.648 --rc genhtml_branch_coverage=1
00:39:15.648 --rc genhtml_function_coverage=1
00:39:15.648 --rc genhtml_legend=1
00:39:15.648 --rc geninfo_all_blocks=1
00:39:15.648 --rc geninfo_unexecuted_blocks=1
00:39:15.648
00:39:15.648 '
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:39:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.648 --rc genhtml_branch_coverage=1
00:39:15.648 --rc genhtml_function_coverage=1
00:39:15.648 --rc genhtml_legend=1
00:39:15.648 --rc geninfo_all_blocks=1
00:39:15.648 --rc geninfo_unexecuted_blocks=1
00:39:15.648
00:39:15.648 '
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:39:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.648 --rc genhtml_branch_coverage=1
00:39:15.648 --rc genhtml_function_coverage=1
00:39:15.648 --rc genhtml_legend=1
00:39:15.648 --rc geninfo_all_blocks=1
00:39:15.648 --rc geninfo_unexecuted_blocks=1
00:39:15.648
00:39:15.648 '
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:39:15.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.648 --rc genhtml_branch_coverage=1
00:39:15.648 --rc genhtml_function_coverage=1
00:39:15.648 --rc genhtml_legend=1
00:39:15.648 --rc geninfo_all_blocks=1
00:39:15.648 --rc geninfo_unexecuted_blocks=1
00:39:15.648
00:39:15.648 '
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
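The xtrace above is scripts/common.sh deciding, via lt 1.15 2 and cmp_versions, whether the installed lcov (1.15) is older than 2 before choosing which --rc option spelling to export. The algorithm is plain field-wise comparison: split both version strings on the characters .-:, then compare component by component. A compact sketch of the same idea (simplified; the real cmp_versions also handles the other operators and tracks lengths through the ver1_l/ver2_l bookkeeping seen in the trace):

    # Field-wise version comparison in the style traced above.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        local v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal: not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: use the --rc lcov_* option names"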
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:15.648 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
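Sourcing test/nvmf/common.sh above pins down the fixture the whole target suite reuses: ports 4420 through 4422, a serial number, and a host NQN/ID pair minted once by nvme gen-hostnqn. Pieced together, those variables express a host-side connect like the sketch below. This is illustrative only: the suite drives connects through its NVME_CONNECT/NVME_HOST wrappers, and the 10.0.0.2 address is borrowed from the earlier disconnect test rather than set here:

    # How the sourced variables combine into one nvme-cli connect invocation.
    NVMF_PORT=4420
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # random uuid-based host NQN, as at nvmf/common.sh@17
    nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
        --hostnqn="$NVME_HOSTNQN"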
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:39:15.911 ************************************
00:39:15.911 START TEST nvmf_abort
00:39:15.911 ************************************
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:39:15.911 * Looking for test storage...
00:39:15.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:15.911 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:39:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.911 --rc genhtml_branch_coverage=1
00:39:15.911 --rc genhtml_function_coverage=1
00:39:15.911 --rc genhtml_legend=1
00:39:15.911 --rc geninfo_all_blocks=1
00:39:15.911 --rc geninfo_unexecuted_blocks=1
00:39:15.911
00:39:15.911 '
00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:39:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.912 --rc genhtml_branch_coverage=1
00:39:15.912 --rc genhtml_function_coverage=1
00:39:15.912 --rc genhtml_legend=1
00:39:15.912 --rc geninfo_all_blocks=1
00:39:15.912 --rc geninfo_unexecuted_blocks=1
00:39:15.912
00:39:15.912 '
00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:39:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.912 --rc genhtml_branch_coverage=1
00:39:15.912 --rc genhtml_function_coverage=1
00:39:15.912 --rc genhtml_legend=1
00:39:15.912 --rc geninfo_all_blocks=1
00:39:15.912 --rc geninfo_unexecuted_blocks=1
00:39:15.912
00:39:15.912 '
00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:39:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:15.912 --rc genhtml_branch_coverage=1
00:39:15.912 --rc genhtml_function_coverage=1
00:39:15.912 --rc genhtml_legend=1
00:39:15.912 --rc geninfo_all_blocks=1
00:39:15.912 --rc geninfo_unexecuted_blocks=1
00:39:15.912
00:39:15.912 '
00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:15.912 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.173 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.174 15:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.174 15:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.317 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.317 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:24.317 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:24.317 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:24.318 15:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:24.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:24.318 15:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:24.318 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:24.318 Found net devices under 0000:31:00.0: cvl_0_0 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:24.318 Found net devices under 0000:31:00.1: cvl_0_1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:24.318 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:24.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:24.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:39:24.319 00:39:24.319 --- 10.0.0.2 ping statistics --- 00:39:24.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.319 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:24.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:39:24.319 00:39:24.319 --- 10.0.0.1 ping statistics --- 00:39:24.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.319 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:24.319 15:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=651900 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 651900 00:39:24.319 
15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 651900 ']' 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:24.319 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.319 [2024-09-27 15:58:04.096740] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:24.319 [2024-09-27 15:58:04.097892] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:24.319 [2024-09-27 15:58:04.097952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.319 [2024-09-27 15:58:04.190547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:24.319 [2024-09-27 15:58:04.237161] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.319 [2024-09-27 15:58:04.237218] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.319 [2024-09-27 15:58:04.237227] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.319 [2024-09-27 15:58:04.237235] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.319 [2024-09-27 15:58:04.237242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:24.319 [2024-09-27 15:58:04.237405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:24.319 [2024-09-27 15:58:04.237561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.319 [2024-09-27 15:58:04.237562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.319 [2024-09-27 15:58:04.316355] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:24.319 [2024-09-27 15:58:04.317319] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:24.319 [2024-09-27 15:58:04.318124] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:24.319 [2024-09-27 15:58:04.318153] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
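For reference, the network plumbing and target launch traced above condense to the sketch below. The interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, port 4420 and the core mask are all taken from this run; the real nvmftestinit/nvmfappstart helpers in test/nvmf/common.sh do more (address flushes, cleanup traps, iptables bookkeeping, waitforlisten), so treat this as a minimal sketch, not the harness itself.

    # Isolate the target-side port of the NIC pair in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the initiator-facing interface, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace; -m 0xE = cores 1-3, matching the three
    # "Reactor started on core 1/2/3" notices, and --interrupt-mode is what flips
    # each spdk_thread to intr mode in the notices above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

($SPDK_DIR stands in for the checked-out tree, here /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.) Once the app reports listening on /var/tmp/spdk.sock, abort.sh configures it over rpc.py, which is the rpc_cmd sequence that follows.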
00:39:24.580 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 [2024-09-27 15:58:04.978520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 Malloc0 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 Delay0 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 15:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.581 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.581 [2024-09-27 15:58:05.066503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.843 15:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:24.843 [2024-09-27 15:58:05.240106] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:27.392 Initializing NVMe Controllers 00:39:27.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:27.392 controller IO queue size 128 less than required 00:39:27.392 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:27.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:27.392 Initialization complete. Launching workers. 
00:39:27.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28844 00:39:27.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28901, failed to submit 66 00:39:27.392 success 28844, unsuccessful 57, failed 0 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:27.392 rmmod nvme_tcp 00:39:27.392 rmmod nvme_fabrics 00:39:27.392 rmmod nvme_keyring 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 651900 ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 651900 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 651900 ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 651900 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651900 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651900' 00:39:27.392 killing process with pid 651900 00:39:27.392 
15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 651900 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 651900 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.392 15:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:29.307 00:39:29.307 real 0m13.552s 00:39:29.307 user 0m11.092s 00:39:29.307 sys 0m7.057s 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:29.307 ************************************ 00:39:29.307 END TEST nvmf_abort 00:39:29.307 ************************************ 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:29.307 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:29.569 ************************************ 00:39:29.569 START TEST nvmf_ns_hotplug_stress 00:39:29.569 ************************************ 00:39:29.569 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:29.569 * Looking for test storage... 
00:39:29.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:29.569 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:29.569 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:39:29.569 15:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:29.569 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:29.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.570 --rc genhtml_branch_coverage=1 00:39:29.570 --rc genhtml_function_coverage=1 00:39:29.570 --rc genhtml_legend=1 00:39:29.570 --rc geninfo_all_blocks=1 00:39:29.570 --rc geninfo_unexecuted_blocks=1 00:39:29.570 00:39:29.570 ' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:29.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.570 --rc genhtml_branch_coverage=1 00:39:29.570 --rc genhtml_function_coverage=1 00:39:29.570 --rc genhtml_legend=1 00:39:29.570 --rc geninfo_all_blocks=1 00:39:29.570 --rc geninfo_unexecuted_blocks=1 00:39:29.570 00:39:29.570 ' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:29.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.570 --rc genhtml_branch_coverage=1 00:39:29.570 --rc genhtml_function_coverage=1 00:39:29.570 --rc genhtml_legend=1 00:39:29.570 --rc geninfo_all_blocks=1 00:39:29.570 --rc geninfo_unexecuted_blocks=1 00:39:29.570 00:39:29.570 ' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:29.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.570 --rc genhtml_branch_coverage=1 00:39:29.570 --rc genhtml_function_coverage=1 
00:39:29.570 --rc genhtml_legend=1 00:39:29.570 --rc geninfo_all_blocks=1 00:39:29.570 --rc geninfo_unexecuted_blocks=1 00:39:29.570 00:39:29.570 ' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.570 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
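Each run_test repeats this preamble: the harness reads the last field of `lcov --version` and, only when the version is below 2, injects the old-style `--rc lcov_*` switches into LCOV_OPTS/LCOV (lcov 2.x renamed those keys). Below is a minimal re-sketch of the `lt`/cmp_versions idea traced above, assuming purely numeric version components; the real helper in scripts/common.sh also validates each component with a regex and supports other comparison operators.

    # Split versions on dots/dashes/colons and compare element-wise, left to right.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing part decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Here 1.15 splits into (1 15), the comparison is decided by 1 < 2 on the first component, and the genhtml_* flags seen in the trace are then folded into the exported LCOV_OPTS/LCOV values.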
00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.832 15:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.984 15:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:37.984 15:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:37.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:37.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:37.984 15:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:37.984 Found net devices under 0000:31:00.0: cvl_0_0 00:39:37.984 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:37.985 Found net devices under 0000:31:00.1: cvl_0_1 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.985 15:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:39:37.985 00:39:37.985 --- 10.0.0.2 ping statistics --- 00:39:37.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.985 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:37.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:39:37.985 00:39:37.985 --- 10.0.0.1 ping statistics --- 00:39:37.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.985 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=656781 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 656781 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 656781 ']' 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
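[The namespace plumbing above condenses to the following commands; a minimal sketch using the interface names and addresses from this run (two back-to-back ice ports, cvl_0_0/cvl_0_1), not the harness's verbatim code:

#!/usr/bin/env bash
set -euo pipefail
NS=cvl_0_0_ns_spdk                       # namespace that hosts the target port
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# allow TCP/4420 in on the initiator-side port (the harness additionally tags
# the rule with an SPDK_NVMF comment so it can be cleaned up later)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root ns -> netns, as logged above
ip netns exec "$NS" ping -c 1 10.0.0.1   # netns -> root ns

Splitting one host's two ports across a network namespace gives a real NIC-to-NIC path while keeping target and initiator on the same machine.]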
00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:37.985 15:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:37.985 [2024-09-27 15:58:17.835957] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.985 [2024-09-27 15:58:17.837092] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:37.985 [2024-09-27 15:58:17.837141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.985 [2024-09-27 15:58:17.930174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:37.985 [2024-09-27 15:58:17.976674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.985 [2024-09-27 15:58:17.976743] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.985 [2024-09-27 15:58:17.976752] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.985 [2024-09-27 15:58:17.976759] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.985 [2024-09-27 15:58:17.976765] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.985 [2024-09-27 15:58:17.976964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:37.985 [2024-09-27 15:58:17.977115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.985 [2024-09-27 15:58:17.977116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:37.986 [2024-09-27 15:58:18.054824] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:37.986 [2024-09-27 15:58:18.054824] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:37.986 [2024-09-27 15:58:18.055584] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:37.986 [2024-09-27 15:58:18.055676] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
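[The target launch and startup notices above amount to roughly the following; a sketch assuming the workspace path from this log, with an RPC-socket poll standing in for the harness's own waitforlisten helper (the probe loop is illustrative, not the harness's code):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# wait until /var/tmp/spdk.sock answers; any cheap RPC works as a probe
until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
    sleep 0.2
done

-m 0xE masks the app to cores 1-3, matching the three reactor notices above, and --interrupt-mode is what produces the spdk_thread "intr mode" messages as each poll group is created.]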
00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:38.248 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:38.510 [2024-09-27 15:58:18.878006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.510 15:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:38.772 15:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:39.034 [2024-09-27 15:58:19.286884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:39.034 15:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:39.034 15:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:39.296 Malloc0 00:39:39.296 15:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:39.557 Delay0 00:39:39.557 15:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:39.819 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:39.819 NULL1 00:39:39.819 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
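[From the transport creation above through the loop whose iterations follow, the test paraphrases to the sketch below; the RPC command lines are taken from this log, while the loop shape mirrors target/ns_hotplug_stress.sh as exercised here rather than quoting it verbatim:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # nsid 1, behind 1 s delays
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# 30 s of queued reads from the initiator side while namespaces churn
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do    # keep going as long as perf runs
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-add it
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 live
done

Because the delay bdev holds reads for a full second, each remove_ns lands while perf still has I/O queued against nsid 1; the repeated "Read completed with error (sct=0, sc=11)" / "Message suppressed 999 times" lines below are the expected initiator-side symptom of the hot-unplug (the batching is consistent with the -Q 1000 setting), and the iterations keep advancing null_size (1001, 1002, ...) for the full 30-second perf run.]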
00:39:40.081 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=657337 00:39:40.081 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:40.081 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:40.081 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.342 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:40.603 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:40.603 15:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:40.603 true 00:39:40.603 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:40.603 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.864 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:41.127 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:41.127 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:41.388 true 00:39:41.388 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:41.388 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.388 Read completed with error (sct=0, sc=11) 00:39:41.650 15:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:41.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:41.650 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:39:41.650 15:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:41.650 15:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:41.911 true 00:39:41.911 15:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:41.911 15:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:42.853 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:42.853 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:42.853 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:43.114 true 00:39:43.114 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:43.114 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:43.375 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:43.375 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:43.375 15:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:43.636 true 00:39:43.636 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:43.636 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:43.898 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.159 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:44.159 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:44.159 true 00:39:44.159 15:58:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:44.159 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:44.420 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.682 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:44.682 15:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:44.682 true 00:39:44.682 15:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:44.682 15:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 15:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.066 15:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:46.066 15:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:46.328 true 00:39:46.328 15:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:46.328 15:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:47.272 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.273 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:47.273 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:39:47.534 true 00:39:47.534 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:47.534 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:47.534 15:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.794 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:47.794 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:48.054 true 00:39:48.054 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:48.054 15:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:48.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:48.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:48.995 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.255 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:49.255 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:49.516 true 00:39:49.516 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:49.516 15:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:50.459 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.459 15:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:50.459 15:58:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:50.719 true 00:39:50.719 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:50.719 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:50.719 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.980 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:50.980 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:51.240 true 00:39:51.240 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:51.240 15:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 15:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:52.626 15:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:52.626 15:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:52.626 true 00:39:52.886 15:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:52.886 15:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:53.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:53.828 15:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:53.828 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:53.828 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:53.828 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:53.828 true 00:39:54.088 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:54.088 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:54.088 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:54.348 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:54.348 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:54.641 true 00:39:54.641 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:54.641 15:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:54.641 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:54.932 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:54.932 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:54.932 true 00:39:55.226 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:55.226 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:55.226 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:55.505 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:55.505 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:55.505 true 00:39:55.505 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:55.505 15:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:55.787 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.072 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:56.072 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:56.072 true 00:39:56.072 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:56.072 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:56.377 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.672 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:56.673 15:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:56.673 true 00:39:56.673 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:56.673 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:56.988 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:56.988 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:57.249 true 00:39:57.249 15:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:57.249 15:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:58.192 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:58.192 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:58.193 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:58.454 true 00:39:58.454 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:58.454 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:58.715 15:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:58.715 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:58.715 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:58.976 true 00:39:58.976 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:58.976 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:59.497 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:59.497 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1024 00:39:59.497 true 00:39:59.497 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:39:59.497 15:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.437 15:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.697 15:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:40:00.698 15:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:40:00.698 true 00:40:00.698 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:00.698 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.958 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:01.218 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:40:01.218 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:40:01.218 true 00:40:01.218 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:01.218 15:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:02.599 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:02.599 15:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:02.859 true 00:40:02.859 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:02.859 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:03.799 15:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:03.799 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:03.799 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:04.059 true 00:40:04.059 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:04.059 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.059 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.319 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:04.319 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:04.579 true 00:40:04.579 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:04.579 15:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:05.778 15:58:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:05.778 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:06.038 true 00:40:06.038 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:06.038 15:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:06.979 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.979 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:06.979 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:07.239 true 00:40:07.239 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:07.239 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.498 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.498 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:07.498 15:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:07.759 true 00:40:07.759 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:07.759 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.020 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.020 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:08.020 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:08.282 true 00:40:08.282 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:08.282 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.543 15:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.805 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:08.805 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:08.805 true 00:40:08.805 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:08.805 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:09.326 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:09.326 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:09.326 true 00:40:09.326 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337 00:40:09.326 15:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:10.265 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:10.524 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:40:10.524 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:40:10.524 true
00:40:10.524 Initializing NVMe Controllers
00:40:10.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:10.524 Controller IO queue size 128, less than required.
00:40:10.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:10.524 Controller IO queue size 128, less than required.
00:40:10.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:10.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:40:10.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:40:10.524 Initialization complete. Launching workers.
00:40:10.524 ========================================================
00:40:10.524                                                                            Latency(us)
00:40:10.524 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:40:10.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2420.60       1.18   28981.14    1196.62 1026517.31
00:40:10.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16314.55       7.97    7815.01    1232.64  400583.91
00:40:10.524 ========================================================
00:40:10.524 Total                                                                  :   18735.15       9.15   10549.70    1196.62 1026517.31
00:40:10.524
00:40:10.524 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 657337
00:40:10.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (657337) - No such process
00:40:10.524 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 657337
00:40:10.524 15:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:10.782 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:40:11.041 null0
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:40:11.041 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:40:11.301 null1 00:40:11.301 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.301 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.301 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:40:11.561 null2 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:40:11.561 null3 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.561 15:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:40:11.822 null4 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:40:11.822 null5 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:11.822 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:40:12.082 null6 00:40:12.082 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:12.082 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:12.082 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:40:12.342 null7 00:40:12.342 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:12.342 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
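The sh@44-sh@50 entries traced above form the hotplug/resize loop of ns_hotplug_stress.sh: while the background I/O process (PID 657337 here) is still alive, the script removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev as that namespace, bumps null_size by one, and resizes the NULL1 bdev to match. A minimal sketch of that loop, reconstructed from the traced commands (rpc_py and perf_pid are assumed names standing in for scripts/rpc.py and the traced PID):

    # Sketch of the sh@44-sh@50 loop; rpc_py and perf_pid are assumed names.
    null_size=1024
    while kill -0 "$perf_pid"; do                                           # sh@44: run while I/O is in flight
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add the Delay0 bdev back
        ((null_size++))                                                     # sh@49: 1025, 1026, ...
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                       # sh@50: grow the null bdev one block
    done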
00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
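Each of these workers runs the add_remove helper whose sh@14-sh@18 entries interleave through the rest of the trace: ten iterations of attaching a given null bdev as a fixed namespace ID and detaching it again. A sketch under the same naming assumptions as above:

    # Sketch of add_remove as it appears in the sh@14-sh@18 trace.
    add_remove() {
        local nsid=$1 bdev=$2                                                              # sh@14
        for ((i = 0; i < 10; i++)); do                                                     # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17: attach
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18: detach
        done
    }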
00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
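The interleaved sh@62-sh@64 counters are eight such workers being forked in parallel, one per null bdev, with each $! collected so the script can block on all of them at the eight-PID wait (sh@66) just below. Roughly, under the same assumptions:

    # Rough shape of the launcher, per the sh@58-sh@66 trace.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096    # sh@60: null bdevs null0..null7
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &              # sh@63: pair nsid i+1 with bdev null<i>
        pids+=($!)                                      # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                                   # sh@66: join all eight workers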
00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 663470 663471 663473 663476 663477 663479 663481 663483 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:12.343 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.603 15:58:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.603 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.604 15:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.604 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:12.864 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:12.865 
15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:12.865 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:13.126 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.387 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:13.648 15:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:13.648 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:13.648 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:13.648 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:13.648 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.648 15:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.648 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:13.908 15:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:13.908 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.168 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.428 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:14.689 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.689 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.689 15:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:14.689 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.689 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.689 15:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:14.689 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:14.689 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.689 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:14.690 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:14.690 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:14.690 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:14.690 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:14.690 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:14.949 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:15.208 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.209 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.470 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:15.731 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:15.731 15:58:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:15.731 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:15.992 15:58:56 
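The ns_hotplug_stress passes traced above and below all follow one shape: target/ns_hotplug_stress.sh@16-@18 advance a counter while it is below 10, attach namespaces 1-8 (bdevs null0-null7, with nsid N backed by null(N-1), as in "-n 5 ... null4") to nqn.2016-06.io.spdk:cnode1, then detach all eight. A minimal bash sketch of that cycle, reconstructed from these trace lines; the per-pass shuf ordering and the variable names are assumptions, not the script's verbatim source:

    #!/usr/bin/env bash
    # Hedged reconstruction of the hotplug stress cycle this log traces.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do            # the "(( i < 10 ))" checks at @16
        for nsid in $(shuf -i 1-8); do        # assumption: randomized order per pass
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"   # @17
        done
        for nsid in $(shuf -i 1-8); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"                       # @18
        done
    done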
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:15.992 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:15.992 rmmod nvme_tcp
00:40:16.252 rmmod nvme_fabrics
00:40:16.252 rmmod nvme_keyring
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 656781 ']'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 656781 ']'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 656781'
00:40:16.252 killing process with pid 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 656781
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:40:16.252 15:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:18.796 15:58:58
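The teardown just traced runs killprocess against the nvmf_tgt pid (656781): confirm the pid is set and still alive, inspect its comm name (reactor_1 here) so a sudo wrapper can be special-cased, then kill and reap it. A condensed, hedged sketch of that helper (common/autotest_common.sh@950-@974; body paraphrased, not the verbatim source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard (@950)
        kill -0 "$pid" || return 1             # still running? (@954)
        if [ "$(uname)" = Linux ]; then        # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_1 here (@956)
            : # the real helper branches here when the target is a sudo wrapper (@960); elided
        fi
        echo "killing process with pid $pid"   # @968
        kill "$pid" && wait "$pid"             # @969 / @974
    }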
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:40:18.796
00:40:18.796 real 0m48.965s
00:40:18.796 user 2m57.750s
00:40:18.796 sys 0m21.587s
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:40:18.796 ************************************
00:40:18.796 END TEST nvmf_ns_hotplug_stress
00:40:18.796 ************************************
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:40:18.796 ************************************
00:40:18.796 START TEST nvmf_delete_subsystem
00:40:18.796 ************************************
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:40:18.796 * Looking for test storage...
00:40:18.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:40:18.796 15:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem
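The entries that follow step through scripts/common.sh's cmp_versions as it evaluates lt 1.15 2: both version strings are split on the characters .-: and compared component by component. A hedged, condensed sketch of that comparison (the traced original differs in detail):

    cmp_versions() {                          # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { gt=1; break; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
        done
        case "$op" in '<') ((lt == 1)) ;; '>') ((gt == 1)) ;; esac
    }
    cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # succeeds, matching the trace below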
-- scripts/common.sh@341 -- # ver2_l=1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:18.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.796 --rc genhtml_branch_coverage=1 00:40:18.796 --rc genhtml_function_coverage=1 00:40:18.796 --rc genhtml_legend=1 00:40:18.796 --rc geninfo_all_blocks=1 00:40:18.796 --rc geninfo_unexecuted_blocks=1 00:40:18.796 00:40:18.796 ' 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:18.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.796 --rc genhtml_branch_coverage=1 00:40:18.796 --rc genhtml_function_coverage=1 00:40:18.796 --rc genhtml_legend=1 00:40:18.796 --rc geninfo_all_blocks=1 00:40:18.796 --rc geninfo_unexecuted_blocks=1 00:40:18.796 00:40:18.796 ' 00:40:18.796 15:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:18.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.796 --rc genhtml_branch_coverage=1 00:40:18.796 --rc genhtml_function_coverage=1 00:40:18.796 --rc genhtml_legend=1 00:40:18.796 --rc geninfo_all_blocks=1 00:40:18.796 --rc geninfo_unexecuted_blocks=1 00:40:18.796 00:40:18.796 ' 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:18.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.796 --rc genhtml_branch_coverage=1 00:40:18.796 --rc genhtml_function_coverage=1 00:40:18.796 --rc genhtml_legend=1 00:40:18.796 --rc geninfo_all_blocks=1 00:40:18.796 --rc geninfo_unexecuted_blocks=1 00:40:18.796 00:40:18.796 ' 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
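Sourcing nvmf/common.sh above pins the harness defaults this test reuses: port 4420, a host NQN/ID produced by nvme gen-hostnqn, the NVME_HOST array of --hostnqn/--hostid flags, and NVME_CONNECT='nvme connect'. For reference, a hypothetical initiator-side connect assembled from those variables; this exact invocation does not appear in this excerpt, and the subsystem NQN is borrowed from the earlier test purely for illustration:

    # Hypothetical example built from the variables traced above.
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"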
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.796 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:18.797 15:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:40:26.930 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:26.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:26.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:26.931 15:59:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:26.931 Found net devices under 0000:31:00.0: cvl_0_0 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:26.931 Found net devices under 0000:31:00.1: cvl_0_1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # 
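The discovery pass above walks the supported-NIC tables (Intel e810/x722 and Mellanox device IDs), keeps the two ice ports at 0000:31:00.0/.1, and resolves each PCI function to its renamed net device through sysfs, as the "/sys/bus/pci/devices/$pci/net/"* expansion shows. A hedged sketch of that resolution (reading operstate is an assumption; the harness's own check is the [[ up == up ]] test above):

    # Resolve PCI functions to net devices via sysfs; device list from this log.
    net_devs=()
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue
            dev=${path##*/}                                    # cvl_0_0 / cvl_0_1 here
            [ "$(cat "$path/operstate" 2>/dev/null)" = up ] && net_devs+=("$dev")
        done
    done
    echo "Found net devices: ${net_devs[*]}"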
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:26.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:26.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:40:26.931 00:40:26.931 --- 10.0.0.2 ping statistics --- 00:40:26.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.931 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:26.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:26.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:40:26.931 00:40:26.931 --- 10.0.0.1 ping statistics --- 00:40:26.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.931 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:40:26.931 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=668462 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 668462 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 668462 ']' 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:26.932 15:59:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:26.932 15:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:26.932 [2024-09-27 15:59:06.649387] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:26.932 [2024-09-27 15:59:06.650378] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:26.932 [2024-09-27 15:59:06.650415] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.932 [2024-09-27 15:59:06.733514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:26.932 [2024-09-27 15:59:06.765412] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.932 [2024-09-27 15:59:06.765453] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.932 [2024-09-27 15:59:06.765460] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.932 [2024-09-27 15:59:06.765468] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.932 [2024-09-27 15:59:06.765474] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.932 [2024-09-27 15:59:06.765613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.932 [2024-09-27 15:59:06.765614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.932 [2024-09-27 15:59:06.813718] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:26.932 [2024-09-27 15:59:06.814182] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:26.932 [2024-09-27 15:59:06.814533] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
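A note for readers tracing the harness: the nvmf/common.sh records above (roughly @250 through @291) are SPDK's physical-NIC TCP bring-up. The second port of the NIC pair is moved into a private network namespace so the initiator and the target exchange packets over a real link rather than loopback. A minimal standalone sketch of that plumbing, using the interface names, addresses, and port exactly as they appear in this run (run as root; error handling omitted):

  # flush stale addresses, then split the port pair across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target back to initiator

Every target-side command in this log is therefore prefixed with ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt launch traced above.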
00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 [2024-09-27 15:59:07.486496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 [2024-09-27 15:59:07.522888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 NULL1 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 Delay0 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=668720 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:27.193 15:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:27.193 [2024-09-27 15:59:07.636229] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
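For orientation before the teardown that follows: rpc_cmd in the records above is the test-harness wrapper around SPDK's scripts/rpc.py, talking to /var/tmp/spdk.sock. Replayed by hand, the configuration sequence traced from delete_subsystem.sh@15 through @24 would look roughly like this sketch; the direct rpc.py invocation style is an assumption, while every method name and argument is verbatim from the trace:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # transport options exactly as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                           # any host, serial number, max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000              # about 1 s of injected latency per I/O
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev appears to be the point of the fixture: with roughly one second added to every operation, the 128-deep queue that spdk_nvme_perf builds up is still in flight when the subsystem is deleted out from under it.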
00:40:29.101 15:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:40:29.101 15:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:29.101 15:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:40:29.362 Read completed with error (sct=0, sc=8)
00:40:29.362 Write completed with error (sct=0, sc=8)
00:40:29.362 starting I/O failed: -6
00:40:29.362 [... identical Read/Write completed with error (sct=0, sc=8) records, interleaved with starting I/O failed: -6, repeat for the remaining outstanding I/O ...]
00:40:29.362 [2024-09-27 15:59:09.758276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ebed0 is same with the state(6) to be set
00:40:29.362 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:29.363 [... Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 records continue ...]
00:40:29.363 [2024-09-27 15:59:09.763723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1c38000c00 is same with the state(6) to be set
00:40:29.363 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:30.306 [2024-09-27 15:59:10.736655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e9b20 is same with the state(6) to be set
00:40:30.306 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:30.306 [2024-09-27 15:59:10.761702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ec0b0 is same with the state(6) to be set
00:40:30.306 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:30.306 [2024-09-27 15:59:10.762043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8eac50 is same with the state(6) to be set
00:40:30.306 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:30.306 [2024-09-27 15:59:10.766544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1c3800cfe0 is same with the state(6) to be set
00:40:30.306 [... Read/Write completed with error (sct=0, sc=8) records continue ...]
00:40:30.306 [2024-09-27 15:59:10.766640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x7f1c3800d780 is same with the state(6) to be set 00:40:30.306 Initializing NVMe Controllers 00:40:30.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:30.306 Controller IO queue size 128, less than required. 00:40:30.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:30.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:30.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:30.306 Initialization complete. Launching workers. 00:40:30.306 ======================================================== 00:40:30.306 Latency(us) 00:40:30.306 Device Information : IOPS MiB/s Average min max 00:40:30.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.86 0.08 892141.29 326.20 1006500.61 00:40:30.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.38 0.08 908775.53 313.06 1043560.58 00:40:30.307 ======================================================== 00:40:30.307 Total : 335.24 0.16 900297.75 313.06 1043560.58 00:40:30.307 00:40:30.307 [2024-09-27 15:59:10.767245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e9b20 (9): Bad file descriptor 00:40:30.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:30.307 15:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.307 15:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:40:30.307 15:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 668720 00:40:30.307 15:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 668720 00:40:30.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (668720) - No such process 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 668720 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 668720 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 668720 
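Two notes on the records above. The flood of completions with (sct=0, sc=8) is the test passing, not failing: status code type 0 with status code 8 is, per the NVMe base specification, Command Aborted due to SQ Deletion, which is precisely what nvmf_delete_subsystem inflicts on the in-flight perf I/O (the interleaved starting I/O failed: -6 lines are new submissions losing the same race). The @34 through @45 records then trace delete_subsystem.sh polling until the perf process disappears; reconstructed from those traced line numbers rather than copied from the script, the pattern is roughly:

  delay=0
  while kill -0 "$perf_pid"; do         # kill -0 only probes whether the PID exists
      sleep 0.5
      (( delay++ > 30 )) && return 1    # give up after ~15 s of naps
  done
  NOT wait "$perf_pid"                  # harness helper asserting a non-zero exit

kill -0 (traced at @35) delivers no signal; it just asks the kernel whether the process is still there, which is why bash prints its 'No such process' complaint once perf dies, and why NOT wait must then confirm that perf exited with an error.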
00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:30.876 [2024-09-27 15:59:11.298782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=669400 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:30.876 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:31.135 [2024-09-27 15:59:11.387909] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:40:31.396 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:31.396 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:31.396 15:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:31.965 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:31.965 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:31.965 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:32.536 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:32.536 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:32.536 15:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.105 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.105 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:33.105 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.364 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.364 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:33.364 15:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:33.932 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:33.932 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:33.932 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:34.192 Initializing NVMe Controllers 00:40:34.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:34.192 Controller IO queue size 128, less than required. 00:40:34.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:34.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:34.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:34.192 Initialization complete. Launching workers. 
00:40:34.192 ======================================================== 00:40:34.192 Latency(us) 00:40:34.192 Device Information : IOPS MiB/s Average min max 00:40:34.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002044.51 1000188.66 1005911.47 00:40:34.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003844.28 1000204.45 1010402.15 00:40:34.192 ======================================================== 00:40:34.192 Total : 256.00 0.12 1002944.39 1000188.66 1010402.15 00:40:34.192 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 669400 00:40:34.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (669400) - No such process 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 669400 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:34.452 rmmod nvme_tcp 00:40:34.452 rmmod nvme_fabrics 00:40:34.452 rmmod nvme_keyring 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 668462 ']' 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 668462 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 668462 ']' 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 668462 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:34.452 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 668462 00:40:34.712 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:34.712 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:34.712 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 668462' 00:40:34.712 killing process with pid 668462 00:40:34.712 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 668462 00:40:34.712 15:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 668462 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.712 15:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:37.251 00:40:37.251 real 0m18.299s 00:40:37.251 user 0m26.387s 00:40:37.251 sys 0m7.507s 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:37.251 ************************************ 00:40:37.251 END TEST nvmf_delete_subsystem 00:40:37.251 ************************************ 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:37.251 ************************************ 00:40:37.251 START TEST nvmf_host_management 00:40:37.251 ************************************ 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:37.251 * Looking for test storage... 00:40:37.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:37.251 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:37.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.252 --rc genhtml_branch_coverage=1 00:40:37.252 --rc genhtml_function_coverage=1 00:40:37.252 --rc genhtml_legend=1 00:40:37.252 --rc geninfo_all_blocks=1 00:40:37.252 --rc geninfo_unexecuted_blocks=1 00:40:37.252 00:40:37.252 ' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:37.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.252 --rc genhtml_branch_coverage=1 00:40:37.252 --rc genhtml_function_coverage=1 00:40:37.252 --rc genhtml_legend=1 00:40:37.252 --rc geninfo_all_blocks=1 00:40:37.252 --rc geninfo_unexecuted_blocks=1 00:40:37.252 00:40:37.252 ' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:37.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.252 --rc genhtml_branch_coverage=1 00:40:37.252 --rc genhtml_function_coverage=1 00:40:37.252 --rc genhtml_legend=1 00:40:37.252 --rc geninfo_all_blocks=1 00:40:37.252 --rc geninfo_unexecuted_blocks=1 00:40:37.252 00:40:37.252 ' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:37.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.252 --rc genhtml_branch_coverage=1 00:40:37.252 --rc genhtml_function_coverage=1 00:40:37.252 --rc genhtml_legend=1 
00:40:37.252 --rc geninfo_all_blocks=1 00:40:37.252 --rc geninfo_unexecuted_blocks=1 00:40:37.252 00:40:37.252 ' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.252 15:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.252 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.253 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:37.253 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:37.253 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:37.253 15:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:45.382 15:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:45.382 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:45.382 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:45.382 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.383 
15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:45.383 Found net devices under 0000:31:00.0: cvl_0_0 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:45.383 Found net devices under 0000:31:00.1: cvl_0_1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:45.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:45.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:40:45.383 00:40:45.383 --- 10.0.0.2 ping statistics --- 00:40:45.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.383 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:45.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:45.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:40:45.383 00:40:45.383 --- 10.0.0.1 ping statistics --- 00:40:45.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.383 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=674270 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 674270 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 674270 ']' 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:45.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.383 15:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.383 [2024-09-27 15:59:24.978637] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:45.383 [2024-09-27 15:59:24.979627] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:45.383 [2024-09-27 15:59:24.979664] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.383 [2024-09-27 15:59:25.063167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:45.384 [2024-09-27 15:59:25.097555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.384 [2024-09-27 15:59:25.097597] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.384 [2024-09-27 15:59:25.097605] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.384 [2024-09-27 15:59:25.097614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.384 [2024-09-27 15:59:25.097621] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:45.384 [2024-09-27 15:59:25.097795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:45.384 [2024-09-27 15:59:25.097952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:45.384 [2024-09-27 15:59:25.098360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:45.384 [2024-09-27 15:59:25.098361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.384 [2024-09-27 15:59:25.166002] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:45.384 [2024-09-27 15:59:25.166812] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:45.384 [2024-09-27 15:59:25.167532] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:45.384 [2024-09-27 15:59:25.168006] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:45.384 [2024-09-27 15:59:25.168087] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
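The trace above is nvmf_tcp_init at work: the test carves an isolated point-to-point topology out of the two-port E810 NIC by moving one port (cvl_0_0) into a fresh network namespace, addressing the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), opening TCP port 4420 in iptables, and ping-verifying both directions before nvmf_tgt is launched inside the namespace with --interrupt-mode and core mask 0x1E. A minimal sketch of the same setup, distilled from the commands visible in the trace (interface names, addresses, and the port are copied from the log; this is an approximation, not the verbatim nvmf/common.sh code):

# Sketch: isolate the target NIC in a namespace and wire up the test subnet.
# Names and addresses mirror the trace (cvl_0_0 -> target, cvl_0_1 -> initiator).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # root namespace -> target, as in the trace
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator

Running the target in its own namespace (the subsequent ip netns exec ... nvmf_tgt invocation in the trace) lets initiator and target share one host while still exercising a real NIC-to-NIC TCP path rather than loopback.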
00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.384 [2024-09-27 15:59:25.827270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.384 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.645 Malloc0 00:40:45.645 [2024-09-27 15:59:25.919476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=674509 00:40:45.645 15:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 674509 /var/tmp/bdevperf.sock 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 674509 ']' 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:45.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:45.645 { 00:40:45.645 "params": { 00:40:45.645 "name": "Nvme$subsystem", 00:40:45.645 "trtype": "$TEST_TRANSPORT", 00:40:45.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.645 "adrfam": "ipv4", 00:40:45.645 "trsvcid": "$NVMF_PORT", 00:40:45.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.645 "hdgst": ${hdgst:-false}, 00:40:45.645 "ddgst": ${ddgst:-false} 00:40:45.645 }, 00:40:45.645 "method": "bdev_nvme_attach_controller" 00:40:45.645 } 00:40:45.645 EOF 00:40:45.645 )") 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
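The heredoc traced above is how the test hands bdevperf its controller configuration: gen_nvmf_target_json expands one bdev_nvme_attach_controller stanza per subsystem index, validates and pretty-prints the stream with jq, and the bdevperf command line reads the result back through /dev/fd/63, which is the file descriptor bash assigns to a process substitution. A sketch of the document produced for subsystem 0, with the parameter values copied from the printf output that follows in the trace (the wrapper shape and the reconstructed invocation are assumptions, not the exact common.sh helper):

# Sketch: regenerate the JSON that bdevperf consumed, for subsystem index 0.
# `jq .` only validates and pretty-prints; the values come from the trace.
cat <<'EOF' | jq .
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
# Hypothetical invocation matching the traced command line; <(...) is what
# surfaces as --json /dev/fd/63 in the log:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
#          -q 64 -o 65536 -w verify -t 10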
00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:40:45.645 15:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:45.645 "params": { 00:40:45.645 "name": "Nvme0", 00:40:45.645 "trtype": "tcp", 00:40:45.645 "traddr": "10.0.0.2", 00:40:45.645 "adrfam": "ipv4", 00:40:45.645 "trsvcid": "4420", 00:40:45.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.645 "hdgst": false, 00:40:45.645 "ddgst": false 00:40:45.645 }, 00:40:45.645 "method": "bdev_nvme_attach_controller" 00:40:45.645 }' 00:40:45.645 [2024-09-27 15:59:26.029021] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:45.645 [2024-09-27 15:59:26.029085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674509 ] 00:40:45.645 [2024-09-27 15:59:26.115085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.906 [2024-09-27 15:59:26.162214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.906 Running I/O for 10 seconds... 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:46.481 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=718 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 718 -ge 100 ']' 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.482 [2024-09-27 15:59:26.915268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 
[2024-09-27 15:59:26.915385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.915404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcf7d0 is same with the state(6) to be set 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.482 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:46.482 [2024-09-27 15:59:26.929675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:46.482 [2024-09-27 15:59:26.929718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.929729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:46.482 [2024-09-27 15:59:26.929737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.929745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:46.482 [2024-09-27 15:59:26.929753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.929761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:46.482 [2024-09-27 15:59:26.929769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.929776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e369d0 is same with the state(6) to be set 00:40:46.482 [2024-09-27 15:59:26.930390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:46.482 [2024-09-27 15:59:26.930604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:46.482 [2024-09-27 15:59:26.930613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:46.482 [2024-09-27 15:59:26.930621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:46.482 [... the identical WRITE / "ABORTED - SQ DELETION (00/08)" pair repeats for cid 13 through cid 63, one 128-block WRITE per cid from lba 108160 up to 114560; 51 further near-identical pairs condensed here ...]
00:40:46.483 [2024-09-27 15:59:26.931548] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x204f6b0 was disconnected and freed. reset controller.
00:40:46.483 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:46.483 15:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:40:46.483 [2024-09-27 15:59:26.932724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:40:46.483 task offset: 106496 on job bdev=Nvme0n1 fails
00:40:46.483
00:40:46.483 Latency(us)
00:40:46.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:46.483 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:46.483 Job: Nvme0n1 ended in about 0.59 seconds with error
00:40:46.483 Verification LBA range: start 0x0 length 0x400
00:40:46.483 Nvme0n1 : 0.59 1420.77 88.80 109.29 0.00 40846.24 1590.61 35389.44
00:40:46.483 ===================================================================================================================
00:40:46.483 Total : 1420.77 88.80 109.29 0.00 40846.24 1590.61 35389.44
00:40:46.483 [2024-09-27 15:59:26.934721] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:46.483 [2024-09-27 15:59:26.934743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e369d0 (9): Bad file descriptor
00:40:46.743 [2024-09-27 15:59:27.029014] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
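The run of "ABORTED - SQ DELETION (00/08)" completions above is the expected host-side signature of this test: host_management.sh kills the target while bdevperf still has a full queue (depth 64) outstanding, so every in-flight WRITE is completed with status 00/08 when the submission queue is torn down, and bdev_nvme then frees the qpair and resets the controller, which succeeds once the target is back. For reference, a minimal sketch of attaching the same controller by hand with an explicit reconnect window; the transport, address, and NQNs restate this run, while the two timeout options and their values are illustrative assumptions, not settings taken from this log:

    # Hedged sketch: attach with explicit reconnect tuning (timeout values are assumptions)
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2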
00:40:47.685 15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 674509
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (674509) - No such process
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:40:47.685 {
00:40:47.685 "params": {
00:40:47.685 "name": "Nvme$subsystem",
00:40:47.685 "trtype": "$TEST_TRANSPORT",
00:40:47.685 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:47.685 "adrfam": "ipv4",
00:40:47.685 "trsvcid": "$NVMF_PORT",
00:40:47.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:47.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:47.685 "hdgst": ${hdgst:-false},
00:40:47.685 "ddgst": ${ddgst:-false}
00:40:47.685 },
00:40:47.685 "method": "bdev_nvme_attach_controller"
00:40:47.685 }
00:40:47.685 EOF
00:40:47.685 )")
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
15:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:40:47.685 "params": {
00:40:47.685 "name": "Nvme0",
00:40:47.685 "trtype": "tcp",
00:40:47.685 "traddr": "10.0.0.2",
00:40:47.685 "adrfam": "ipv4",
00:40:47.685 "trsvcid": "4420",
00:40:47.685 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:47.685 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:47.685 "hdgst": false,
00:40:47.685 "ddgst": false
00:40:47.685 },
00:40:47.685 "method": "bdev_nvme_attach_controller"
00:40:47.685 }'
[2024-09-27 15:59:27.994029] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
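The trace above is how the harness hands bdevperf its configuration without a temp file: gen_nvmf_target_json expands a heredoc per subsystem and the result is fed in as --json /dev/fd/62. A stripped-down sketch of the same pattern using process substitution; the attach parameters restate the values printed in this run, while the outer "subsystems"/"bdev" envelope is the usual SPDK JSON-config shape, assumed here since the trace only shows the inner entry:

    # Build the JSON config in a function and feed it to bdevperf via
    # process substitution, mirroring the --json /dev/fd/62 pattern above.
    gen_json() {
        printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } } ] } ] }'
    }
    ./build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 1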
00:40:47.685 [2024-09-27 15:59:27.994099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674864 ]
[2024-09-27 15:59:28.077446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-09-27 15:59:28.122489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
1958.00 IOPS, 122.38 MiB/s
00:40:48.887 Latency(us)
00:40:48.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:48.887 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:48.887 Verification LBA range: start 0x0 length 0x400
00:40:48.887 Nvme0n1 : 1.01 1999.83 124.99 0.00 0.00 31295.31 2676.05 33204.91
00:40:48.887 ===================================================================================================================
00:40:48.887 Total : 1999.83 124.99 0.00 0.00 31295.31 2676.05 33204.91
00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 674270 ']'
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 674270
15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
common/autotest_common.sh@950 -- # '[' -z 674270 ']' 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 674270 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 674270 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 674270' 00:40:49.148 killing process with pid 674270 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 674270 00:40:49.148 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 674270 00:40:49.410 [2024-09-27 15:59:29.729742] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:49.410 15:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:51.954 00:40:51.954 real 0m14.586s 00:40:51.954 user 0m19.156s 00:40:51.954 sys 0m7.491s 00:40:51.954 15:59:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:51.954 ************************************ 00:40:51.954 END TEST nvmf_host_management 00:40:51.954 ************************************ 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:51.954 ************************************ 00:40:51.954 START TEST nvmf_lvol 00:40:51.954 ************************************ 00:40:51.954 15:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:51.954 * Looking for test storage... 00:40:51.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:51.954 15:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.954 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:51.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.955 --rc genhtml_branch_coverage=1 00:40:51.955 --rc genhtml_function_coverage=1 00:40:51.955 --rc genhtml_legend=1 00:40:51.955 --rc geninfo_all_blocks=1 00:40:51.955 --rc geninfo_unexecuted_blocks=1 00:40:51.955 00:40:51.955 ' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:51.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.955 --rc genhtml_branch_coverage=1 00:40:51.955 --rc genhtml_function_coverage=1 00:40:51.955 --rc genhtml_legend=1 00:40:51.955 --rc geninfo_all_blocks=1 00:40:51.955 --rc geninfo_unexecuted_blocks=1 00:40:51.955 00:40:51.955 ' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:51.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.955 --rc genhtml_branch_coverage=1 00:40:51.955 --rc genhtml_function_coverage=1 00:40:51.955 --rc genhtml_legend=1 00:40:51.955 --rc geninfo_all_blocks=1 00:40:51.955 --rc geninfo_unexecuted_blocks=1 00:40:51.955 00:40:51.955 ' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:51.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.955 --rc genhtml_branch_coverage=1 00:40:51.955 --rc genhtml_function_coverage=1 00:40:51.955 --rc 
genhtml_legend=1 00:40:51.955 --rc geninfo_all_blocks=1 00:40:51.955 --rc geninfo_unexecuted_blocks=1 00:40:51.955 00:40:51.955 ' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same directory set with the go toolchain promoted to the front ...]
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same directory set with protoc promoted to the front ...]
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH, identical to the @4 line above ...]
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:40:51.955 15:59:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.955 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:51.956 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.956 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:51.956 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:51.956 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:51.956 15:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:00.090 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:00.090 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:41:00.090 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:00.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:00.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:00.091 Found net devices under 0000:31:00.0: cvl_0_0 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:00.091 Found net devices under 0000:31:00.1: cvl_0_1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:00.091 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:00.091 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:00.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:00.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:41:00.091 00:41:00.091 --- 10.0.0.2 ping statistics --- 00:41:00.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.091 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:00.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:00.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:41:00.092 00:41:00.092 --- 10.0.0.1 ping statistics --- 00:41:00.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.092 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:00.092 15:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=679430 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 679430 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 679430 ']' 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:00.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:00.092 15:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:00.092 [2024-09-27 15:59:39.630666] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:00.092 [2024-09-27 15:59:39.631648] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:41:00.092 [2024-09-27 15:59:39.631685] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:00.092 [2024-09-27 15:59:39.714973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:00.092 [2024-09-27 15:59:39.746832] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:00.092 [2024-09-27 15:59:39.746870] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:00.092 [2024-09-27 15:59:39.746878] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:00.092 [2024-09-27 15:59:39.746885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:00.092 [2024-09-27 15:59:39.746890] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:00.092 [2024-09-27 15:59:39.747043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:00.092 [2024-09-27 15:59:39.747275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.092 [2024-09-27 15:59:39.747275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:00.092 [2024-09-27 15:59:39.812445] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:00.092 [2024-09-27 15:59:39.813258] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
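Here nvmfappstart -m 0x7 brings the target up inside the target namespace in interrupt mode: three reactors start on cores 0, 1, and 2, and each spdk_thread is switched to interrupt-driven scheduling (the remaining poll-group notices follow just below). A minimal sketch of the same launch done by hand; the readiness probe via rpc.py spdk_get_version is an illustrative stand-in for the harness's waitforlisten helper, not what this run executed:

    # Launch nvmf_tgt in the target netns on cores 0-2 (-m 0x7), interrupt mode,
    # then poll the RPC socket until the target is ready for configuration.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.2   # illustrative back-off; waitforlisten has its own retry loop
    done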
00:41:00.092 [2024-09-27 15:59:39.813346] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:00.092 [2024-09-27 15:59:39.813644] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:00.092 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:00.352 [2024-09-27 15:59:40.612098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:00.352 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.612 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:00.612 15:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.612 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:00.612 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:00.871 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:01.131 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5ed946ac-47bb-4b15-90ef-e6c9bc1603ab 00:41:01.131 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ed946ac-47bb-4b15-90ef-e6c9bc1603ab lvol 20 00:41:01.131 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b2b17345-d275-470a-bfd1-9b3aedcd51be 00:41:01.131 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:01.391 15:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2b17345-d275-470a-bfd1-9b3aedcd51be 00:41:01.651 15:59:41 
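Condensed, the nvmf_lvol.sh setup traced above is a short RPC sequence: create the TCP transport, stripe two 64 MiB malloc bdevs into a RAID0, put a logical-volume store on the RAID, carve out a 20 MiB lvol, and export it as namespace 1 of a new subsystem (the TCP listener is added just after this point). As a sketch, with rpc standing in for the full scripts/rpc.py path in the log:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # transport options as traced
    $rpc bdev_malloc_create 64 512                                 # -> Malloc0: 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512                                 # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' # RAID0 across both
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB volume, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"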
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:01.651 [2024-09-27 15:59:42.099905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.651 15:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:01.910 15:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=679969 00:41:01.910 15:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:01.910 15:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:02.849 15:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b2b17345-d275-470a-bfd1-9b3aedcd51be MY_SNAPSHOT 00:41:03.110 15:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=350666db-2f2e-4e73-87cd-baed1166fa98 00:41:03.110 15:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b2b17345-d275-470a-bfd1-9b3aedcd51be 30 00:41:03.371 15:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 350666db-2f2e-4e73-87cd-baed1166fa98 MY_CLONE 00:41:03.631 15:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4e9adf48-6641-415c-b0da-67c08b194dc8 00:41:03.632 15:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4e9adf48-6641-415c-b0da-67c08b194dc8 00:41:04.201 15:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 679969 00:41:12.337 Initializing NVMe Controllers 00:41:12.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:12.337 Controller IO queue size 128, less than required. 00:41:12.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:12.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:41:12.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:41:12.337 Initialization complete. Launching workers. 
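While spdk_nvme_perf drives 4 KiB random writes at queue depth 128 against the namespace, the script mutates the volume underneath the live workload: snapshot it, resize it to 30 MiB, clone the snapshot, inflate the clone to a fully allocated volume, and only then wait for perf to finish. In outline, continuing the sketch above:

    ./build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1                                              # let the workload ramp up
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # lvol becomes a thin clone of the snapshot
    $rpc bdev_lvol_resize "$lvol" 30                     # grow the live volume 20 -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # second writable view of the snapshot
    $rpc bdev_lvol_inflate "$clone"                      # allocate every cluster, detach from snapshot
    wait "$perf_pid"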
00:41:12.337 ======================================================== 00:41:12.337 Latency(us) 00:41:12.337 Device Information : IOPS MiB/s Average min max 00:41:12.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15289.10 59.72 8374.66 2278.20 67601.51 00:41:12.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15818.80 61.79 8093.55 3907.41 63747.79 00:41:12.337 ======================================================== 00:41:12.337 Total : 31107.90 121.52 8231.71 2278.20 67601.51 00:41:12.337 00:41:12.337 15:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:12.337 15:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2b17345-d275-470a-bfd1-9b3aedcd51be 00:41:12.597 15:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ed946ac-47bb-4b15-90ef-e6c9bc1603ab 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:12.597 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:12.597 rmmod nvme_tcp 00:41:12.597 rmmod nvme_fabrics 00:41:12.858 rmmod nvme_keyring 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 679430 ']' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 679430 ']' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
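For the record, the MiB/s column in the results above is just IOPS times the 4 KiB I/O size: 15289.10 IO/s x 4096 B = 59.72 MiB/s on core 3, 15818.80 IO/s x 4096 B = 61.79 MiB/s on core 4, and 121.52 MiB/s in total (the -c 0x18 mask is cores 3 and 4). Teardown then proceeds in reverse order of creation: the subsystem goes first so no initiator can still reach the lvol, then the lvol, then the lvstore.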
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 679430' 00:41:12.858 killing process with pid 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 679430 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:12.858 15:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:15.400 00:41:15.400 real 0m23.480s 00:41:15.400 user 0m54.847s 00:41:15.400 sys 0m10.684s 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:15.400 ************************************ 00:41:15.400 END TEST nvmf_lvol 00:41:15.400 ************************************ 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:15.400 ************************************ 00:41:15.400 START TEST nvmf_lvs_grow 00:41:15.400 
************************************ 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:15.400 * Looking for test storage... 00:41:15.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.400 --rc genhtml_branch_coverage=1 00:41:15.400 --rc genhtml_function_coverage=1 00:41:15.400 --rc genhtml_legend=1 00:41:15.400 --rc geninfo_all_blocks=1 00:41:15.400 --rc geninfo_unexecuted_blocks=1 00:41:15.400 00:41:15.400 ' 00:41:15.400 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.400 --rc genhtml_branch_coverage=1 00:41:15.400 --rc genhtml_function_coverage=1 00:41:15.400 --rc genhtml_legend=1 00:41:15.401 --rc geninfo_all_blocks=1 00:41:15.401 --rc geninfo_unexecuted_blocks=1 00:41:15.401 00:41:15.401 ' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:15.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.401 --rc genhtml_branch_coverage=1 00:41:15.401 --rc genhtml_function_coverage=1 00:41:15.401 --rc genhtml_legend=1 00:41:15.401 --rc geninfo_all_blocks=1 00:41:15.401 --rc geninfo_unexecuted_blocks=1 00:41:15.401 00:41:15.401 ' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:15.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.401 --rc genhtml_branch_coverage=1 00:41:15.401 --rc genhtml_function_coverage=1 00:41:15.401 --rc genhtml_legend=1 00:41:15.401 --rc geninfo_all_blocks=1 00:41:15.401 --rc geninfo_unexecuted_blocks=1 00:41:15.401 00:41:15.401 ' 00:41:15.401 15:59:55 
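The scripts/common.sh trace above is the suite deciding whether the installed lcov predates 2.0; here 1.15 < 2, so the legacy --rc lcov_branch_coverage/lcov_function_coverage option strings get exported. The comparison splits each version string on dots and dashes and compares numerically field by field, with missing fields treated as 0. A compact re-implementation of that idea (numeric fields only; the real helper also validates each field through its decimal() function, as the trace shows):

    cmp_versions() {                 # supports '<', '>', '==' (the trace only needs '<')
        local IFS=.- op=$2
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$3"
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}           # missing fields compare as 0
            (( x > y )) && { [[ $op == '>' ]]; return; }
            (( x < y )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }
    cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2   # true for lcov 1.15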
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:41:15.401 15:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:23.544 16:00:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:23.544 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 
00:41:23.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:23.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:23.545 Found net devices under 0000:31:00.0: cvl_0_0 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:23.545 Found net devices under 0000:31:00.1: cvl_0_1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:23.545 16:00:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:23.545 16:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:23.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:23.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:41:23.545 00:41:23.545 --- 10.0.0.2 ping statistics --- 00:41:23.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.545 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:23.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:23.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:41:23.545 00:41:23.545 --- 10.0.0.1 ping statistics --- 00:41:23.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.545 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=686231 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 686231 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 686231 ']' 00:41:23.545 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.546 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:23.546 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:23.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.546 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:23.546 16:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.546 [2024-09-27 16:00:03.255899] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:41:23.546 [2024-09-27 16:00:03.256760] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:41:23.546 [2024-09-27 16:00:03.256799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.546 [2024-09-27 16:00:03.337291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.546 [2024-09-27 16:00:03.373582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.546 [2024-09-27 16:00:03.373628] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.546 [2024-09-27 16:00:03.373640] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.546 [2024-09-27 16:00:03.373647] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.546 [2024-09-27 16:00:03.373653] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.546 [2024-09-27 16:00:03.373683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.546 [2024-09-27 16:00:03.423592] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:23.546 [2024-09-27 16:00:03.423849] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:23.807 [2024-09-27 16:00:04.266458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:23.807 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:24.067 ************************************ 00:41:24.067 START TEST lvs_grow_clean 00:41:24.067 ************************************ 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:24.067 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:24.328 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:24.328 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:24.328 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:24.589 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:24.589 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:24.589 16:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 lvol 150 00:41:24.850 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4d26032-6360-472a-aaa1-f2b383e7d20e 00:41:24.850 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:24.850 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:24.850 [2024-09-27 16:00:05.266179] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:24.850 [2024-09-27 16:00:05.266350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:24.850 true 00:41:24.850 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:24.850 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:25.110 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:25.110 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:25.372 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4d26032-6360-472a-aaa1-f2b383e7d20e 00:41:25.372 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:25.631 [2024-09-27 16:00:05.966702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:25.631 16:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=686866 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 686866 /var/tmp/bdevperf.sock 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 686866 ']' 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
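The lvs_grow_clean setup and grow traced above work on a plain file rather than real media: a sparse 200 MiB file becomes an AIO bdev with 4 KiB blocks, and an lvstore with 4 MiB clusters goes on top. 200 MiB / 4 MiB = 50 clusters, and the expected total_data_clusters of 49 is consistent with one cluster's worth of space going to lvstore metadata. Growing is then just extending the file and rescanning: the NOTICE confirms the bdev doubling from 51200 x 4 KiB = 200 MiB to 102400 x 4 KiB = 400 MiB, while the data-cluster count stays at 49, since resizing the bdev does not by itself grow the lvstore (that separate grow step is what the rest of this test exercises). Abbreviated, with the backing-file name shortened from the full aio_bdev path in the log:

    rpc=./scripts/rpc.py
    truncate -s 200M aio_file                        # sparse backing file
    $rpc bdev_aio_create aio_file aio_bdev 4096      # AIO bdev, 4 KiB logical blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150) # 150 MiB volume
    truncate -s 400M aio_file                        # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                    # bdev picks up the new size
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49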
rpc_addr=/var/tmp/bdevperf.sock 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:25.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:25.892 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:25.892 [2024-09-27 16:00:06.199482] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:41:25.892 [2024-09-27 16:00:06.199535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686866 ] 00:41:25.892 [2024-09-27 16:00:06.276969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.892 [2024-09-27 16:00:06.308677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.834 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:26.834 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:41:26.834 16:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:27.094 Nvme0n1 00:41:27.094 16:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:27.094 [ 00:41:27.094 { 00:41:27.094 "name": "Nvme0n1", 00:41:27.094 "aliases": [ 00:41:27.094 "c4d26032-6360-472a-aaa1-f2b383e7d20e" 00:41:27.094 ], 00:41:27.094 "product_name": "NVMe disk", 00:41:27.094 "block_size": 4096, 00:41:27.094 "num_blocks": 38912, 00:41:27.094 "uuid": "c4d26032-6360-472a-aaa1-f2b383e7d20e", 00:41:27.094 "numa_id": 0, 00:41:27.094 "assigned_rate_limits": { 00:41:27.094 "rw_ios_per_sec": 0, 00:41:27.094 "rw_mbytes_per_sec": 0, 00:41:27.094 "r_mbytes_per_sec": 0, 00:41:27.094 "w_mbytes_per_sec": 0 00:41:27.094 }, 00:41:27.094 "claimed": false, 00:41:27.094 "zoned": false, 00:41:27.094 "supported_io_types": { 00:41:27.094 "read": true, 00:41:27.094 "write": true, 00:41:27.094 "unmap": true, 00:41:27.094 "flush": true, 00:41:27.094 "reset": true, 00:41:27.094 "nvme_admin": true, 00:41:27.094 "nvme_io": true, 00:41:27.094 "nvme_io_md": false, 00:41:27.094 "write_zeroes": true, 00:41:27.094 "zcopy": false, 00:41:27.094 "get_zone_info": false, 00:41:27.094 "zone_management": false, 00:41:27.094 "zone_append": false, 00:41:27.094 "compare": true, 00:41:27.094 "compare_and_write": true, 00:41:27.094 "abort": true, 00:41:27.094 "seek_hole": false, 00:41:27.094 "seek_data": false, 00:41:27.094 "copy": true, 
00:41:27.094 "nvme_iov_md": false 00:41:27.094 }, 00:41:27.094 "memory_domains": [ 00:41:27.094 { 00:41:27.095 "dma_device_id": "system", 00:41:27.095 "dma_device_type": 1 00:41:27.095 } 00:41:27.095 ], 00:41:27.095 "driver_specific": { 00:41:27.095 "nvme": [ 00:41:27.095 { 00:41:27.095 "trid": { 00:41:27.095 "trtype": "TCP", 00:41:27.095 "adrfam": "IPv4", 00:41:27.095 "traddr": "10.0.0.2", 00:41:27.095 "trsvcid": "4420", 00:41:27.095 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:27.095 }, 00:41:27.095 "ctrlr_data": { 00:41:27.095 "cntlid": 1, 00:41:27.095 "vendor_id": "0x8086", 00:41:27.095 "model_number": "SPDK bdev Controller", 00:41:27.095 "serial_number": "SPDK0", 00:41:27.095 "firmware_revision": "25.01", 00:41:27.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:27.095 "oacs": { 00:41:27.095 "security": 0, 00:41:27.095 "format": 0, 00:41:27.095 "firmware": 0, 00:41:27.095 "ns_manage": 0 00:41:27.095 }, 00:41:27.095 "multi_ctrlr": true, 00:41:27.095 "ana_reporting": false 00:41:27.095 }, 00:41:27.095 "vs": { 00:41:27.095 "nvme_version": "1.3" 00:41:27.095 }, 00:41:27.095 "ns_data": { 00:41:27.095 "id": 1, 00:41:27.095 "can_share": true 00:41:27.095 } 00:41:27.095 } 00:41:27.095 ], 00:41:27.095 "mp_policy": "active_passive" 00:41:27.095 } 00:41:27.095 } 00:41:27.095 ] 00:41:27.095 16:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=687198 00:41:27.095 16:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:27.095 16:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:27.356 Running I/O for 10 seconds... 
00:41:28.296 Latency(us) 00:41:28.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:28.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:28.296 Nvme0n1 : 1.00 16635.00 64.98 0.00 0.00 0.00 0.00 0.00 00:41:28.296 =================================================================================================================== 00:41:28.296 Total : 16635.00 64.98 0.00 0.00 0.00 0.00 0.00 00:41:28.296 00:41:29.236 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:29.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:29.236 Nvme0n1 : 2.00 16997.50 66.40 0.00 0.00 0.00 0.00 0.00 00:41:29.236 =================================================================================================================== 00:41:29.236 Total : 16997.50 66.40 0.00 0.00 0.00 0.00 0.00 00:41:29.236 00:41:29.236 true 00:41:29.496 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:29.496 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:29.496 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:29.496 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:29.496 16:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 687198 00:41:30.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:30.436 Nvme0n1 : 3.00 17083.67 66.73 0.00 0.00 0.00 0.00 0.00 00:41:30.436 =================================================================================================================== 00:41:30.436 Total : 17083.67 66.73 0.00 0.00 0.00 0.00 0.00 00:41:30.436 00:41:31.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:31.376 Nvme0n1 : 4.00 17184.75 67.13 0.00 0.00 0.00 0.00 0.00 00:41:31.377 =================================================================================================================== 00:41:31.377 Total : 17184.75 67.13 0.00 0.00 0.00 0.00 0.00 00:41:31.377 00:41:32.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:32.317 Nvme0n1 : 5.00 18456.60 72.10 0.00 0.00 0.00 0.00 0.00 00:41:32.317 =================================================================================================================== 00:41:32.317 Total : 18456.60 72.10 0.00 0.00 0.00 0.00 0.00 00:41:32.317 00:41:33.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:33.258 Nvme0n1 : 6.00 19468.50 76.05 0.00 0.00 0.00 0.00 0.00 00:41:33.258 =================================================================================================================== 00:41:33.258 Total : 19468.50 76.05 0.00 0.00 0.00 0.00 0.00 00:41:33.258 00:41:34.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:34.200 Nvme0n1 : 7.00 20201.57 78.91 0.00 0.00 0.00 0.00 
0.00 00:41:34.200 =================================================================================================================== 00:41:34.200 Total : 20201.57 78.91 0.00 0.00 0.00 0.00 0.00 00:41:34.200 00:41:35.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:35.586 Nvme0n1 : 8.00 20754.38 81.07 0.00 0.00 0.00 0.00 0.00 00:41:35.586 =================================================================================================================== 00:41:35.586 Total : 20754.38 81.07 0.00 0.00 0.00 0.00 0.00 00:41:35.586 00:41:36.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:36.526 Nvme0n1 : 9.00 21186.11 82.76 0.00 0.00 0.00 0.00 0.00 00:41:36.526 =================================================================================================================== 00:41:36.526 Total : 21186.11 82.76 0.00 0.00 0.00 0.00 0.00 00:41:36.526 00:41:37.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:37.468 Nvme0n1 : 10.00 21528.30 84.09 0.00 0.00 0.00 0.00 0.00 00:41:37.468 =================================================================================================================== 00:41:37.468 Total : 21528.30 84.09 0.00 0.00 0.00 0.00 0.00 00:41:37.468 00:41:37.468 00:41:37.468 Latency(us) 00:41:37.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:37.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:37.468 Nvme0n1 : 10.00 21529.58 84.10 0.00 0.00 5941.19 3932.16 22609.92 00:41:37.468 =================================================================================================================== 00:41:37.468 Total : 21529.58 84.10 0.00 0.00 5941.19 3932.16 22609.92 00:41:37.468 { 00:41:37.468 "results": [ 00:41:37.468 { 00:41:37.468 "job": "Nvme0n1", 00:41:37.468 "core_mask": "0x2", 00:41:37.468 "workload": "randwrite", 00:41:37.468 "status": "finished", 00:41:37.468 "queue_depth": 128, 00:41:37.468 "io_size": 4096, 00:41:37.468 "runtime": 10.004977, 00:41:37.468 "iops": 21529.584725682027, 00:41:37.468 "mibps": 84.09994033469542, 00:41:37.468 "io_failed": 0, 00:41:37.468 "io_timeout": 0, 00:41:37.468 "avg_latency_us": 5941.193840630508, 00:41:37.468 "min_latency_us": 3932.16, 00:41:37.468 "max_latency_us": 22609.92 00:41:37.468 } 00:41:37.468 ], 00:41:37.468 "core_count": 1 00:41:37.468 } 00:41:37.468 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 686866 00:41:37.468 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 686866 ']' 00:41:37.468 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 686866 00:41:37.468 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:41:37.468 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 686866 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 686866' 00:41:37.469 killing process with pid 686866 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 686866 00:41:37.469 Received shutdown signal, test time was about 10.000000 seconds 00:41:37.469 00:41:37.469 Latency(us) 00:41:37.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:37.469 =================================================================================================================== 00:41:37.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 686866 00:41:37.469 16:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:37.729 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:37.989 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:37.989 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:37.989 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:37.989 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:37.989 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:38.250 [2024-09-27 16:00:18.582221] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:38.250 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:38.250 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:41:38.250 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
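Teardown then runs in reverse order. Condensed from the trace (UUID as above), with the arithmetic the free_clusters check encodes — 99 total clusters minus the lvol's 38 allocated leaves 61 free:

    ./scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'  # 99 - 38 = 61
    # deleting the base AIO bdev hot-removes the lvstore riding on it;
    # the NOT wrapper that follows asserts the lvstore lookup now fails
    ./scripts/rpc.py bdev_aio_delete aio_bdev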
00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:38.251 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:38.510 request: 00:41:38.510 { 00:41:38.510 "uuid": "6d9e4f3e-adf4-460b-a9f1-a8ac6f245472", 00:41:38.510 "method": "bdev_lvol_get_lvstores", 00:41:38.510 "req_id": 1 00:41:38.510 } 00:41:38.510 Got JSON-RPC error response 00:41:38.510 response: 00:41:38.510 { 00:41:38.510 "code": -19, 00:41:38.510 "message": "No such device" 00:41:38.510 } 00:41:38.510 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:41:38.510 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:38.510 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:38.510 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:38.510 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:38.511 aio_bdev 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c4d26032-6360-472a-aaa1-f2b383e7d20e 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c4d26032-6360-472a-aaa1-f2b383e7d20e 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
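The expected failure is JSON-RPC error -19 (ENODEV, "No such device"). Recovery is then just re-creating the AIO bdev over the same backing file — the lvol metadata is still on disk, so examine brings the lvstore and lvol back, and waitforbdev reduces to the polling shown next. As a sketch, using the path and UUID from the trace:

    ./scripts/rpc.py bdev_aio_create \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_wait_for_examine
    # -t 2000: allow up to 2000 ms for the bdev to reappear
    ./scripts/rpc.py bdev_get_bdevs -b c4d26032-6360-472a-aaa1-f2b383e7d20e -t 2000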
00:41:38.511 16:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:38.771 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4d26032-6360-472a-aaa1-f2b383e7d20e -t 2000 00:41:38.771 [ 00:41:38.771 { 00:41:38.771 "name": "c4d26032-6360-472a-aaa1-f2b383e7d20e", 00:41:38.771 "aliases": [ 00:41:38.771 "lvs/lvol" 00:41:38.771 ], 00:41:38.771 "product_name": "Logical Volume", 00:41:38.771 "block_size": 4096, 00:41:38.771 "num_blocks": 38912, 00:41:38.771 "uuid": "c4d26032-6360-472a-aaa1-f2b383e7d20e", 00:41:38.771 "assigned_rate_limits": { 00:41:38.771 "rw_ios_per_sec": 0, 00:41:38.771 "rw_mbytes_per_sec": 0, 00:41:38.771 "r_mbytes_per_sec": 0, 00:41:38.771 "w_mbytes_per_sec": 0 00:41:38.771 }, 00:41:38.772 "claimed": false, 00:41:38.772 "zoned": false, 00:41:38.772 "supported_io_types": { 00:41:38.772 "read": true, 00:41:38.772 "write": true, 00:41:38.772 "unmap": true, 00:41:38.772 "flush": false, 00:41:38.772 "reset": true, 00:41:38.772 "nvme_admin": false, 00:41:38.772 "nvme_io": false, 00:41:38.772 "nvme_io_md": false, 00:41:38.772 "write_zeroes": true, 00:41:38.772 "zcopy": false, 00:41:38.772 "get_zone_info": false, 00:41:38.772 "zone_management": false, 00:41:38.772 "zone_append": false, 00:41:38.772 "compare": false, 00:41:38.772 "compare_and_write": false, 00:41:38.772 "abort": false, 00:41:38.772 "seek_hole": true, 00:41:38.772 "seek_data": true, 00:41:38.772 "copy": false, 00:41:38.772 "nvme_iov_md": false 00:41:38.772 }, 00:41:38.772 "driver_specific": { 00:41:38.772 "lvol": { 00:41:38.772 "lvol_store_uuid": "6d9e4f3e-adf4-460b-a9f1-a8ac6f245472", 00:41:38.772 "base_bdev": "aio_bdev", 00:41:38.772 "thin_provision": false, 00:41:38.772 "num_allocated_clusters": 38, 00:41:38.772 "snapshot": false, 00:41:38.772 "clone": false, 00:41:38.772 "esnap_clone": false 00:41:38.772 } 00:41:38.772 } 00:41:38.772 } 00:41:38.772 ] 00:41:38.772 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:41:38.772 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:38.772 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:39.033 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:39.033 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:39.033 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:39.295 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:39.295 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4d26032-6360-472a-aaa1-f2b383e7d20e 00:41:39.295 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d9e4f3e-adf4-460b-a9f1-a8ac6f245472 00:41:39.557 16:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:39.818 00:41:39.818 real 0m15.861s 00:41:39.818 user 0m15.422s 00:41:39.818 sys 0m1.547s 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:39.818 ************************************ 00:41:39.818 END TEST lvs_grow_clean 00:41:39.818 ************************************ 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:39.818 ************************************ 00:41:39.818 START TEST lvs_grow_dirty 00:41:39.818 ************************************ 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:39.818 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:39.818 16:00:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:40.079 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:40.079 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:40.340 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:40.340 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:40.340 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:40.340 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:40.340 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:40.600 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 lvol 150 00:41:40.600 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:40.600 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:40.600 16:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:40.861 [2024-09-27 16:00:21.146148] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:40.861 [2024-09-27 16:00:21.146320] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:40.861 true 00:41:40.861 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:40.861 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:40.861 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:40.861 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:41.122 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:41.384 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:41.384 [2024-09-27 16:00:21.870648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:41.645 16:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=690385 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 690385 /var/tmp/bdevperf.sock 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 690385 ']' 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:41.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:41.645 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:41.645 [2024-09-27 16:00:22.103707] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
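The dirty variant being set up here differs only at the end: after growing the lvstore under I/O and the run completing, it kills the target outright instead of tearing down cleanly, restarts it, and relies on blobstore recovery. Roughly, with $lvs and $nvmfpid standing in for the trace's 4bc6361b-… and 686231:

    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # 49 -> 99, during the randwrite run
    kill -9 "$nvmfpid"                                  # leave the blobstore dirty
    # restart nvmf_tgt, then re-create the AIO bdev; recovery replays the metadata
    # (the 'Performing recovery on blobstore' NOTICEs further down in the log)
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'  # still 61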
00:41:41.645 [2024-09-27 16:00:22.103760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690385 ] 00:41:41.906 [2024-09-27 16:00:22.181082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.906 [2024-09-27 16:00:22.209474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.477 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.477 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:42.477 16:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:42.737 Nvme0n1 00:41:42.738 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:42.999 [ 00:41:42.999 { 00:41:42.999 "name": "Nvme0n1", 00:41:42.999 "aliases": [ 00:41:42.999 "5fe63f63-a697-492d-9e3c-33ebdffd0ebc" 00:41:42.999 ], 00:41:42.999 "product_name": "NVMe disk", 00:41:42.999 "block_size": 4096, 00:41:42.999 "num_blocks": 38912, 00:41:42.999 "uuid": "5fe63f63-a697-492d-9e3c-33ebdffd0ebc", 00:41:42.999 "numa_id": 0, 00:41:42.999 "assigned_rate_limits": { 00:41:42.999 "rw_ios_per_sec": 0, 00:41:42.999 "rw_mbytes_per_sec": 0, 00:41:42.999 "r_mbytes_per_sec": 0, 00:41:42.999 "w_mbytes_per_sec": 0 00:41:42.999 }, 00:41:42.999 "claimed": false, 00:41:42.999 "zoned": false, 00:41:42.999 "supported_io_types": { 00:41:42.999 "read": true, 00:41:42.999 "write": true, 00:41:42.999 "unmap": true, 00:41:42.999 "flush": true, 00:41:42.999 "reset": true, 00:41:42.999 "nvme_admin": true, 00:41:42.999 "nvme_io": true, 00:41:42.999 "nvme_io_md": false, 00:41:42.999 "write_zeroes": true, 00:41:42.999 "zcopy": false, 00:41:42.999 "get_zone_info": false, 00:41:42.999 "zone_management": false, 00:41:42.999 "zone_append": false, 00:41:42.999 "compare": true, 00:41:42.999 "compare_and_write": true, 00:41:42.999 "abort": true, 00:41:42.999 "seek_hole": false, 00:41:42.999 "seek_data": false, 00:41:42.999 "copy": true, 00:41:42.999 "nvme_iov_md": false 00:41:42.999 }, 00:41:42.999 "memory_domains": [ 00:41:42.999 { 00:41:42.999 "dma_device_id": "system", 00:41:42.999 "dma_device_type": 1 00:41:42.999 } 00:41:42.999 ], 00:41:42.999 "driver_specific": { 00:41:42.999 "nvme": [ 00:41:42.999 { 00:41:42.999 "trid": { 00:41:42.999 "trtype": "TCP", 00:41:42.999 "adrfam": "IPv4", 00:41:42.999 "traddr": "10.0.0.2", 00:41:42.999 "trsvcid": "4420", 00:41:42.999 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:42.999 }, 00:41:42.999 "ctrlr_data": { 00:41:42.999 "cntlid": 1, 00:41:42.999 "vendor_id": "0x8086", 00:41:42.999 "model_number": "SPDK bdev Controller", 00:41:42.999 "serial_number": "SPDK0", 00:41:42.999 "firmware_revision": "25.01", 00:41:42.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:42.999 "oacs": { 00:41:42.999 "security": 0, 00:41:42.999 "format": 0, 00:41:42.999 "firmware": 0, 00:41:42.999 "ns_manage": 0 00:41:42.999 }, 
00:41:42.999 "multi_ctrlr": true, 00:41:42.999 "ana_reporting": false 00:41:42.999 }, 00:41:42.999 "vs": { 00:41:42.999 "nvme_version": "1.3" 00:41:42.999 }, 00:41:42.999 "ns_data": { 00:41:42.999 "id": 1, 00:41:42.999 "can_share": true 00:41:42.999 } 00:41:42.999 } 00:41:42.999 ], 00:41:42.999 "mp_policy": "active_passive" 00:41:42.999 } 00:41:42.999 } 00:41:42.999 ] 00:41:42.999 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=690511 00:41:42.999 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:42.999 16:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:42.999 Running I/O for 10 seconds... 00:41:43.941 Latency(us) 00:41:43.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:43.941 Nvme0n1 : 1.00 24434.00 95.45 0.00 0.00 0.00 0.00 0.00 00:41:43.941 =================================================================================================================== 00:41:43.941 Total : 24434.00 95.45 0.00 0.00 0.00 0.00 0.00 00:41:43.941 00:41:44.882 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:45.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:45.143 Nvme0n1 : 2.00 24856.50 97.10 0.00 0.00 0.00 0.00 0.00 00:41:45.143 =================================================================================================================== 00:41:45.143 Total : 24856.50 97.10 0.00 0.00 0.00 0.00 0.00 00:41:45.143 00:41:45.143 true 00:41:45.143 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:45.143 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:45.403 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:45.403 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:45.403 16:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 690511 00:41:45.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:45.972 Nvme0n1 : 3.00 25018.67 97.73 0.00 0.00 0.00 0.00 0.00 00:41:45.972 =================================================================================================================== 00:41:45.972 Total : 25018.67 97.73 0.00 0.00 0.00 0.00 0.00 00:41:45.972 00:41:47.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:47.353 Nvme0n1 : 4.00 25130.50 98.17 0.00 0.00 0.00 0.00 0.00 00:41:47.353 =================================================================================================================== 
00:41:47.353 Total : 25130.50 98.17 0.00 0.00 0.00 0.00 0.00 00:41:47.353 00:41:48.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.293 Nvme0n1 : 5.00 25198.80 98.43 0.00 0.00 0.00 0.00 0.00 00:41:48.293 =================================================================================================================== 00:41:48.293 Total : 25198.80 98.43 0.00 0.00 0.00 0.00 0.00 00:41:48.293 00:41:49.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:49.232 Nvme0n1 : 6.00 25244.33 98.61 0.00 0.00 0.00 0.00 0.00 00:41:49.232 =================================================================================================================== 00:41:49.232 Total : 25244.33 98.61 0.00 0.00 0.00 0.00 0.00 00:41:49.232 00:41:50.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:50.170 Nvme0n1 : 7.00 25276.86 98.74 0.00 0.00 0.00 0.00 0.00 00:41:50.170 =================================================================================================================== 00:41:50.170 Total : 25276.86 98.74 0.00 0.00 0.00 0.00 0.00 00:41:50.170 00:41:51.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:51.109 Nvme0n1 : 8.00 25301.25 98.83 0.00 0.00 0.00 0.00 0.00 00:41:51.109 =================================================================================================================== 00:41:51.109 Total : 25301.25 98.83 0.00 0.00 0.00 0.00 0.00 00:41:51.109 00:41:52.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:52.048 Nvme0n1 : 9.00 25326.78 98.93 0.00 0.00 0.00 0.00 0.00 00:41:52.048 =================================================================================================================== 00:41:52.048 Total : 25326.78 98.93 0.00 0.00 0.00 0.00 0.00 00:41:52.048 00:41:52.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:52.987 Nvme0n1 : 10.00 25347.80 99.01 0.00 0.00 0.00 0.00 0.00 00:41:52.987 =================================================================================================================== 00:41:52.987 Total : 25347.80 99.01 0.00 0.00 0.00 0.00 0.00 00:41:52.987 00:41:52.987 00:41:52.987 Latency(us) 00:41:52.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:52.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:52.987 Nvme0n1 : 10.00 25348.81 99.02 0.00 0.00 5046.40 3194.88 30801.92 00:41:52.987 =================================================================================================================== 00:41:52.987 Total : 25348.81 99.02 0.00 0.00 5046.40 3194.88 30801.92 00:41:52.987 { 00:41:52.987 "results": [ 00:41:52.987 { 00:41:52.987 "job": "Nvme0n1", 00:41:52.987 "core_mask": "0x2", 00:41:52.987 "workload": "randwrite", 00:41:52.987 "status": "finished", 00:41:52.987 "queue_depth": 128, 00:41:52.987 "io_size": 4096, 00:41:52.987 "runtime": 10.004653, 00:41:52.987 "iops": 25348.80520094, 00:41:52.987 "mibps": 99.01877031617188, 00:41:52.987 "io_failed": 0, 00:41:52.987 "io_timeout": 0, 00:41:52.987 "avg_latency_us": 5046.397945789926, 00:41:52.987 "min_latency_us": 3194.88, 00:41:52.987 "max_latency_us": 30801.92 00:41:52.987 } 00:41:52.987 ], 00:41:52.987 "core_count": 1 00:41:52.987 } 00:41:52.987 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 690385 00:41:52.987 16:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 690385 ']' 00:41:52.987 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 690385 00:41:52.987 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:41:52.987 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:52.987 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690385 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690385' 00:41:53.246 killing process with pid 690385 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 690385 00:41:53.246 Received shutdown signal, test time was about 10.000000 seconds 00:41:53.246 00:41:53.246 Latency(us) 00:41:53.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.246 =================================================================================================================== 00:41:53.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 690385 00:41:53.246 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:53.505 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:53.505 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:53.505 16:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 686231 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 686231 00:41:53.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 686231 Killed "${NVMF_APP[@]}" "$@" 00:41:53.765 
16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=692542 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 692542 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 692542 ']' 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:53.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:53.765 16:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:54.025 [2024-09-27 16:00:34.277199] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:54.025 [2024-09-27 16:00:34.278216] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:41:54.025 [2024-09-27 16:00:34.278257] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.025 [2024-09-27 16:00:34.365886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.025 [2024-09-27 16:00:34.403410] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:54.025 [2024-09-27 16:00:34.403456] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:54.025 [2024-09-27 16:00:34.403463] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:54.025 [2024-09-27 16:00:34.403468] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:41:54.025 [2024-09-27 16:00:34.403472] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:54.025 [2024-09-27 16:00:34.403495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.025 [2024-09-27 16:00:34.452356] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:54.025 [2024-09-27 16:00:34.452561] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:54.595 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:54.595 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:54.595 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:54.595 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:54.595 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:54.855 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:54.856 [2024-09-27 16:00:35.281557] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:54.856 [2024-09-27 16:00:35.281768] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:54.856 [2024-09-27 16:00:35.281855] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:54.856 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:55.115 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5fe63f63-a697-492d-9e3c-33ebdffd0ebc -t 2000 00:41:55.391 [ 00:41:55.391 { 00:41:55.391 "name": "5fe63f63-a697-492d-9e3c-33ebdffd0ebc", 00:41:55.391 "aliases": [ 00:41:55.391 "lvs/lvol" 00:41:55.391 ], 00:41:55.391 "product_name": "Logical Volume", 00:41:55.391 "block_size": 4096, 00:41:55.391 "num_blocks": 38912, 00:41:55.391 "uuid": "5fe63f63-a697-492d-9e3c-33ebdffd0ebc", 00:41:55.391 "assigned_rate_limits": { 00:41:55.391 "rw_ios_per_sec": 0, 00:41:55.391 "rw_mbytes_per_sec": 0, 00:41:55.391 "r_mbytes_per_sec": 0, 00:41:55.391 "w_mbytes_per_sec": 0 00:41:55.391 }, 00:41:55.391 "claimed": false, 00:41:55.391 "zoned": false, 00:41:55.391 "supported_io_types": { 00:41:55.391 "read": true, 00:41:55.391 "write": true, 00:41:55.391 "unmap": true, 00:41:55.391 "flush": false, 00:41:55.391 "reset": true, 00:41:55.391 "nvme_admin": false, 00:41:55.391 "nvme_io": false, 00:41:55.391 "nvme_io_md": false, 00:41:55.391 "write_zeroes": true, 00:41:55.391 "zcopy": false, 00:41:55.391 "get_zone_info": false, 00:41:55.391 "zone_management": false, 00:41:55.391 "zone_append": false, 00:41:55.391 "compare": false, 00:41:55.391 "compare_and_write": false, 00:41:55.391 "abort": false, 00:41:55.391 "seek_hole": true, 00:41:55.391 "seek_data": true, 00:41:55.391 "copy": false, 00:41:55.391 "nvme_iov_md": false 00:41:55.391 }, 00:41:55.391 "driver_specific": { 00:41:55.391 "lvol": { 00:41:55.391 "lvol_store_uuid": "4bc6361b-d0dd-4eae-b351-8279b525bbb3", 00:41:55.391 "base_bdev": "aio_bdev", 00:41:55.391 "thin_provision": false, 00:41:55.392 "num_allocated_clusters": 38, 00:41:55.392 "snapshot": false, 00:41:55.392 "clone": false, 00:41:55.392 "esnap_clone": false 00:41:55.392 } 00:41:55.392 } 00:41:55.392 } 00:41:55.392 ] 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:55.392 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:55.709 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:55.709 16:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:55.709 [2024-09-27 16:00:36.124056] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:56.022 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:56.022 request: 00:41:56.022 { 00:41:56.022 "uuid": "4bc6361b-d0dd-4eae-b351-8279b525bbb3", 00:41:56.022 "method": "bdev_lvol_get_lvstores", 00:41:56.022 "req_id": 1 00:41:56.022 } 00:41:56.022 Got JSON-RPC error response 00:41:56.022 response: 00:41:56.022 { 00:41:56.022 "code": -19, 00:41:56.022 "message": "No such device" 00:41:56.022 } 00:41:56.023 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:41:56.023 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:56.023 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:56.023 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:56.023 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:56.357 
aio_bdev 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:56.357 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5fe63f63-a697-492d-9e3c-33ebdffd0ebc -t 2000 00:41:56.647 [ 00:41:56.647 { 00:41:56.647 "name": "5fe63f63-a697-492d-9e3c-33ebdffd0ebc", 00:41:56.647 "aliases": [ 00:41:56.647 "lvs/lvol" 00:41:56.647 ], 00:41:56.647 "product_name": "Logical Volume", 00:41:56.647 "block_size": 4096, 00:41:56.647 "num_blocks": 38912, 00:41:56.647 "uuid": "5fe63f63-a697-492d-9e3c-33ebdffd0ebc", 00:41:56.647 "assigned_rate_limits": { 00:41:56.647 "rw_ios_per_sec": 0, 00:41:56.647 "rw_mbytes_per_sec": 0, 00:41:56.647 "r_mbytes_per_sec": 0, 00:41:56.647 "w_mbytes_per_sec": 0 00:41:56.647 }, 00:41:56.647 "claimed": false, 00:41:56.647 "zoned": false, 00:41:56.647 "supported_io_types": { 00:41:56.647 "read": true, 00:41:56.647 "write": true, 00:41:56.647 "unmap": true, 00:41:56.647 "flush": false, 00:41:56.647 "reset": true, 00:41:56.647 "nvme_admin": false, 00:41:56.647 "nvme_io": false, 00:41:56.647 "nvme_io_md": false, 00:41:56.647 "write_zeroes": true, 00:41:56.647 "zcopy": false, 00:41:56.647 "get_zone_info": false, 00:41:56.647 "zone_management": false, 00:41:56.647 "zone_append": false, 00:41:56.647 "compare": false, 00:41:56.647 "compare_and_write": false, 00:41:56.647 "abort": false, 00:41:56.647 "seek_hole": true, 00:41:56.647 "seek_data": true, 00:41:56.647 "copy": false, 00:41:56.647 "nvme_iov_md": false 00:41:56.647 }, 00:41:56.647 "driver_specific": { 00:41:56.647 "lvol": { 00:41:56.647 "lvol_store_uuid": "4bc6361b-d0dd-4eae-b351-8279b525bbb3", 00:41:56.647 "base_bdev": "aio_bdev", 00:41:56.647 "thin_provision": false, 00:41:56.647 "num_allocated_clusters": 38, 00:41:56.647 "snapshot": false, 00:41:56.647 "clone": false, 00:41:56.647 "esnap_clone": false 00:41:56.647 } 00:41:56.647 } 00:41:56.647 } 00:41:56.647 ] 00:41:56.647 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:56.647 16:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:56.647 16:00:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:56.647 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:56.647 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:56.647 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:56.931 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:56.931 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5fe63f63-a697-492d-9e3c-33ebdffd0ebc 00:41:56.931 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bc6361b-d0dd-4eae-b351-8279b525bbb3 00:41:57.192 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:57.452 00:41:57.452 real 0m17.566s 00:41:57.452 user 0m35.290s 00:41:57.452 sys 0m3.162s 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:57.452 ************************************ 00:41:57.452 END TEST lvs_grow_dirty 00:41:57.452 ************************************ 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:57.452 nvmf_trace.0 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:57.452 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:57.712 rmmod nvme_tcp 00:41:57.712 rmmod nvme_fabrics 00:41:57.712 rmmod nvme_keyring 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 692542 ']' 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 692542 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 692542 ']' 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 692542 00:41:57.712 16:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 692542 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 692542' 00:41:57.712 killing process with pid 692542 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 692542 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 692542 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:57.712 
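Condensed, the teardown above reduces to: archive the trace shared-memory file, unload the initiator-side kernel modules, then reap the target process. A minimal sketch, assuming $output_dir and $nvmfpid stand in for the paths and PID the harness tracks internally:

tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep nvmf_trace.0 for offline analysis
sync
modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_fabrics and nvme_keyring leaving with it
modprobe -v -r nvme-fabrics
process_name=$(ps --no-headers -o comm= "$nvmfpid")
[ "$process_name" = sudo ] || { kill "$nvmfpid"; wait "$nvmfpid"; }   # never signal a sudo wrapper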
16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:41:57.712 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:57.971 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:57.971 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.971 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:57.971 16:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:59.882 00:41:59.882 real 0m44.794s 00:41:59.882 user 0m53.620s 00:41:59.882 sys 0m10.874s 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:59.882 ************************************ 00:41:59.882 END TEST nvmf_lvs_grow 00:41:59.882 ************************************ 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:59.882 ************************************ 00:41:59.882 START TEST nvmf_bdev_io_wait 00:41:59.882 ************************************ 00:41:59.882 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:00.143 * Looking for test storage... 
00:42:00.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.143 --rc genhtml_branch_coverage=1 00:42:00.143 --rc genhtml_function_coverage=1 00:42:00.143 --rc genhtml_legend=1 00:42:00.143 --rc geninfo_all_blocks=1 00:42:00.143 --rc geninfo_unexecuted_blocks=1 00:42:00.143 00:42:00.143 ' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.143 --rc genhtml_branch_coverage=1 00:42:00.143 --rc genhtml_function_coverage=1 00:42:00.143 --rc genhtml_legend=1 00:42:00.143 --rc geninfo_all_blocks=1 00:42:00.143 --rc geninfo_unexecuted_blocks=1 00:42:00.143 00:42:00.143 ' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.143 --rc genhtml_branch_coverage=1 00:42:00.143 --rc genhtml_function_coverage=1 00:42:00.143 --rc genhtml_legend=1 00:42:00.143 --rc geninfo_all_blocks=1 00:42:00.143 --rc geninfo_unexecuted_blocks=1 00:42:00.143 00:42:00.143 ' 00:42:00.143 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:00.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.144 --rc genhtml_branch_coverage=1 00:42:00.144 --rc genhtml_function_coverage=1 00:42:00.144 --rc genhtml_legend=1 00:42:00.144 --rc geninfo_all_blocks=1 00:42:00.144 --rc 
geninfo_unexecuted_blocks=1 00:42:00.144 00:42:00.144 ' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:00.144 16:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
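The gather_supported_nvmf_pci_devs walk starting here buckets NICs by PCI vendor:device ID before choosing the two test ports. An illustrative reconstruction of that classification, with the ID table taken from the nvmf/common.sh lines above; the harness actually reads a pre-built pci_bus_cache, so the lspci parsing below is a simplified assumption:

e810=() x722=() mlx=()
while read -r addr vendor device; do
    case "$vendor:$device" in
        8086:1592 | 8086:159b) e810+=("$addr") ;;   # Intel E810 family
        8086:37d2)             x722+=("$addr") ;;   # Intel X722
        15b3:*)                mlx+=("$addr")  ;;   # Mellanox ConnectX/BlueField IDs listed above
    esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3, $4}')
pci_devs=("${e810[@]}")   # on this rig the E810 ports 0000:31:00.0/.1 are selected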
00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:08.283 16:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:08.283 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:08.283 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:08.283 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:08.284 Found net devices under 0000:31:00.0: cvl_0_0 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.284 16:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:08.284 Found net devices under 0000:31:00.1: cvl_0_1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.284 16:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:08.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:42:08.284 00:42:08.284 --- 10.0.0.2 ping statistics --- 00:42:08.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.284 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:08.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:42:08.284 00:42:08.284 --- 10.0.0.1 ping statistics --- 00:42:08.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.284 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=697543 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 697543 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 697543 ']' 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:08.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
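The interface plumbing that produced the two successful pings splits the NIC's ports across network namespaces, so initiator and target traffic traverses a real link: cvl_0_0 (10.0.0.2) serves the target inside cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. Condensed from the commands above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged with an SPDK_NVMF comment so cleanup can find the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                             # root ns to target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns back to root ns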
00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:08.284 16:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.284 [2024-09-27 16:00:48.239466] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:08.284 [2024-09-27 16:00:48.240446] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:08.284 [2024-09-27 16:00:48.240484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:08.284 [2024-09-27 16:00:48.322513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:08.284 [2024-09-27 16:00:48.356022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:08.284 [2024-09-27 16:00:48.356058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:08.284 [2024-09-27 16:00:48.356066] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:08.284 [2024-09-27 16:00:48.356073] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:08.284 [2024-09-27 16:00:48.356079] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:08.284 [2024-09-27 16:00:48.356214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.284 [2024-09-27 16:00:48.356363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:08.284 [2024-09-27 16:00:48.356512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:08.284 [2024-09-27 16:00:48.356513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:08.284 [2024-09-27 16:00:48.356827] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
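Because the target was launched with --wait-for-rpc, it idles after the EAL and reactor bring-up shown above until configuration that must precede subsystem init arrives over the RPC socket. The two RPCs the test issues next (visible below) must come in exactly this order:

scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev I/O pool/cache sizing; only accepted before init
scripts/rpc.py framework_start_init          # start subsystems; the poll groups then enter interrupt mode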
00:42:08.545 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:08.545 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:42:08.545 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:08.545 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:08.545 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 [2024-09-27 16:00:49.144782] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:08.807 [2024-09-27 16:00:49.145457] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:08.807 [2024-09-27 16:00:49.145637] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:08.807 [2024-09-27 16:00:49.145791] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
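The RPC sequence that follows stands up the complete NVMe/TCP target the four bdevperf jobs will exercise. Written out as plain rpc.py calls, with every argument copied from the log:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -o disables the C2H success optimization, -u sets an 8 KiB I/O unit
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk with 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420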
00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 [2024-09-27 16:00:49.157364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 Malloc0 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:08.807 [2024-09-27 16:00:49.245622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=697804 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=697807 00:42:08.807 16:00:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:08.807 { 00:42:08.807 "params": { 00:42:08.807 "name": "Nvme$subsystem", 00:42:08.807 "trtype": "$TEST_TRANSPORT", 00:42:08.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.807 "adrfam": "ipv4", 00:42:08.807 "trsvcid": "$NVMF_PORT", 00:42:08.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.807 "hdgst": ${hdgst:-false}, 00:42:08.807 "ddgst": ${ddgst:-false} 00:42:08.807 }, 00:42:08.807 "method": "bdev_nvme_attach_controller" 00:42:08.807 } 00:42:08.807 EOF 00:42:08.807 )") 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=697809 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:08.807 { 00:42:08.807 "params": { 00:42:08.807 "name": "Nvme$subsystem", 00:42:08.807 "trtype": "$TEST_TRANSPORT", 00:42:08.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.807 "adrfam": "ipv4", 00:42:08.807 "trsvcid": "$NVMF_PORT", 00:42:08.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.807 "hdgst": ${hdgst:-false}, 00:42:08.807 "ddgst": ${ddgst:-false} 00:42:08.807 }, 00:42:08.807 "method": "bdev_nvme_attach_controller" 00:42:08.807 } 00:42:08.807 EOF 00:42:08.807 )") 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=697813 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:08.807 { 00:42:08.807 "params": { 00:42:08.807 "name": "Nvme$subsystem", 00:42:08.807 "trtype": "$TEST_TRANSPORT", 00:42:08.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.807 "adrfam": "ipv4", 00:42:08.807 "trsvcid": "$NVMF_PORT", 00:42:08.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.807 "hdgst": ${hdgst:-false}, 00:42:08.807 "ddgst": ${ddgst:-false} 00:42:08.807 }, 00:42:08.807 "method": "bdev_nvme_attach_controller" 00:42:08.807 } 00:42:08.807 EOF 00:42:08.807 )") 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:08.807 { 00:42:08.807 "params": { 00:42:08.807 "name": "Nvme$subsystem", 00:42:08.807 "trtype": "$TEST_TRANSPORT", 00:42:08.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.807 "adrfam": "ipv4", 00:42:08.807 "trsvcid": "$NVMF_PORT", 00:42:08.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.807 "hdgst": ${hdgst:-false}, 00:42:08.807 "ddgst": ${ddgst:-false} 00:42:08.807 }, 00:42:08.807 "method": "bdev_nvme_attach_controller" 00:42:08.807 } 00:42:08.807 EOF 00:42:08.807 )") 00:42:08.807 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 697804 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
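gen_nvmf_target_json, expanded four times above, builds one bdev_nvme_attach_controller stanza per subsystem in a bash array via a heredoc, joins the stanzas with IFS=',', and pipes the result through jq so each bdevperf receives validated JSON. A stripped-down illustration of the mechanism (the real helper embeds the stanzas in a complete app JSON config; this is not the verbatim function):

# Illustration of the gen_nvmf_target_json pattern (simplified).
gen_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # wrap the comma-joined stanzas so jq can validate them
}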
00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:08.808 "params": { 00:42:08.808 "name": "Nvme1", 00:42:08.808 "trtype": "tcp", 00:42:08.808 "traddr": "10.0.0.2", 00:42:08.808 "adrfam": "ipv4", 00:42:08.808 "trsvcid": "4420", 00:42:08.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:08.808 "hdgst": false, 00:42:08.808 "ddgst": false 00:42:08.808 }, 00:42:08.808 "method": "bdev_nvme_attach_controller" 00:42:08.808 }' 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:08.808 "params": { 00:42:08.808 "name": "Nvme1", 00:42:08.808 "trtype": "tcp", 00:42:08.808 "traddr": "10.0.0.2", 00:42:08.808 "adrfam": "ipv4", 00:42:08.808 "trsvcid": "4420", 00:42:08.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:08.808 "hdgst": false, 00:42:08.808 "ddgst": false 00:42:08.808 }, 00:42:08.808 "method": "bdev_nvme_attach_controller" 00:42:08.808 }' 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:08.808 "params": { 00:42:08.808 "name": "Nvme1", 00:42:08.808 "trtype": "tcp", 00:42:08.808 "traddr": "10.0.0.2", 00:42:08.808 "adrfam": "ipv4", 00:42:08.808 "trsvcid": "4420", 00:42:08.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:08.808 "hdgst": false, 00:42:08.808 "ddgst": false 00:42:08.808 }, 00:42:08.808 "method": "bdev_nvme_attach_controller" 00:42:08.808 }' 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:42:08.808 16:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:08.808 "params": { 00:42:08.808 "name": "Nvme1", 00:42:08.808 "trtype": "tcp", 00:42:08.808 "traddr": "10.0.0.2", 00:42:08.808 "adrfam": "ipv4", 00:42:08.808 "trsvcid": "4420", 00:42:08.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:08.808 "hdgst": false, 00:42:08.808 "ddgst": false 00:42:08.808 }, 00:42:08.808 "method": "bdev_nvme_attach_controller" 00:42:08.808 }' 00:42:09.068 [2024-09-27 16:00:49.302966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:09.068 [2024-09-27 16:00:49.302970] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
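Each of the four JSON blobs printed above reaches its bdevperf through an anonymous pipe: the --json /dev/fd/63 in the command lines is the file descriptor bash allocates for process substitution. Roughly what bdev_io_wait.sh@27 does for the write job (path shortened; a sketch):

# Sketch: hand the generated config to bdevperf without a temp file (cf. bdev_io_wait.sh@27).
bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
         -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!   # reaped by 'wait' once the one-second write workload finishes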
00:42:09.068 [2024-09-27 16:00:49.303036] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:09.068 [2024-09-27 16:00:49.303037] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:09.068 [2024-09-27 16:00:49.305772] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:09.068 [2024-09-27 16:00:49.305840] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:09.068 [2024-09-27 16:00:49.309222] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:09.068 [2024-09-27 16:00:49.309283] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:09.068 [2024-09-27 16:00:49.496004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.068 [2024-09-27 16:00:49.519030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:42:09.328 [2024-09-27 16:00:49.556765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.328 [2024-09-27 16:00:49.580985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:42:09.328 [2024-09-27 16:00:49.616695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.328 [2024-09-27 16:00:49.644382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:42:09.328 [2024-09-27 16:00:49.710160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.328 [2024-09-27 16:00:49.742570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:42:09.588 Running I/O for 1 seconds... 00:42:09.588 Running I/O for 1 seconds... 00:42:09.847 Running I/O for 1 seconds... 00:42:09.847 Running I/O for 1 seconds... 
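All four bdevperf jobs now run their one-second workloads in parallel on disjoint core masks, and the script reaps them in turn ('wait 697804' traced above at @37, the remaining waits at @38-@40 after the results below). In outline, with the PIDs from this run:

# Sketch of the fan-in after the four workloads (bdev_io_wait.sh@37-40).
wait "$WRITE_PID"   # 697804: -w write, core mask 0x10
wait "$READ_PID"    # 697807: -w read,  core mask 0x20
wait "$FLUSH_PID"   # 697809: -w flush, core mask 0x40
wait "$UNMAP_PID"   # 697813: -w unmap, core mask 0x80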
00:42:10.788 14292.00 IOPS, 55.83 MiB/s 7788.00 IOPS, 30.42 MiB/s 00:42:10.788 Latency(us) 00:42:10.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:10.788 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:10.788 Nvme1n1 : 1.01 14351.01 56.06 0.00 0.00 8890.11 2416.64 14964.05 00:42:10.788 =================================================================================================================== 00:42:10.788 Total : 14351.01 56.06 0.00 0.00 8890.11 2416.64 14964.05 00:42:10.788 00:42:10.788 Latency(us) 00:42:10.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:10.788 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:42:10.788 Nvme1n1 : 1.02 7798.73 30.46 0.00 0.00 16251.18 5488.64 27852.80 00:42:10.788 =================================================================================================================== 00:42:10.788 Total : 7798.73 30.46 0.00 0.00 16251.18 5488.64 27852.80 00:42:10.788 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 697807 00:42:10.788 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 697809 00:42:11.048 9333.00 IOPS, 36.46 MiB/s 00:42:11.048 Latency(us) 00:42:11.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.048 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:11.048 Nvme1n1 : 1.01 9434.47 36.85 0.00 0.00 13529.05 4041.39 36044.80 00:42:11.048 =================================================================================================================== 00:42:11.048 Total : 9434.47 36.85 0.00 0.00 13529.05 4041.39 36044.80 00:42:11.048 188760.00 IOPS, 737.34 MiB/s 00:42:11.048 Latency(us) 00:42:11.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.048 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:11.048 Nvme1n1 : 1.00 188386.71 735.89 0.00 0.00 676.07 307.20 1966.08 00:42:11.048 =================================================================================================================== 00:42:11.048 Total : 188386.71 735.89 0.00 0.00 676.07 307.20 1966.08 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 697813 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.048 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:11.049 16:00:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:11.049 rmmod nvme_tcp 00:42:11.049 rmmod nvme_fabrics 00:42:11.049 rmmod nvme_keyring 00:42:11.049 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 697543 ']' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 697543 ']' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697543' 00:42:11.308 killing process with pid 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 697543 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:42:11.308 16:00:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:11.308 16:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:13.850 00:42:13.850 real 0m13.500s 00:42:13.850 user 0m16.870s 00:42:13.850 sys 0m8.132s 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:13.850 ************************************ 00:42:13.850 END TEST nvmf_bdev_io_wait 00:42:13.850 ************************************ 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:13.850 ************************************ 00:42:13.850 START TEST nvmf_queue_depth 00:42:13.850 ************************************ 00:42:13.850 16:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:13.850 * Looking for test storage... 
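The bdev_io_wait teardown above is symmetric with the setup: iptr replays iptables-save with the SPDK_NVMF-tagged rules filtered out, and remove_spdk_ns tears down the target namespace before the next test begins. Condensed (a sketch of the traced cleanup, not the repo helpers):

# Sketch of the nvmf_tcp_fini cleanup (cf. nvmf/common.sh@297-303 above).
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the tests added
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove_spdk_ns, namespace name from this run
ip -4 addr flush cvl_0_1                               # clear the initiator-side address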
00:42:13.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.850 --rc genhtml_branch_coverage=1 00:42:13.850 --rc genhtml_function_coverage=1 00:42:13.850 --rc genhtml_legend=1 00:42:13.850 --rc geninfo_all_blocks=1 00:42:13.850 --rc geninfo_unexecuted_blocks=1 00:42:13.850 00:42:13.850 ' 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.850 --rc genhtml_branch_coverage=1 00:42:13.850 --rc genhtml_function_coverage=1 00:42:13.850 --rc genhtml_legend=1 00:42:13.850 --rc geninfo_all_blocks=1 00:42:13.850 --rc geninfo_unexecuted_blocks=1 00:42:13.850 00:42:13.850 ' 00:42:13.850 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:13.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.850 --rc genhtml_branch_coverage=1 00:42:13.850 --rc genhtml_function_coverage=1 00:42:13.850 --rc genhtml_legend=1 00:42:13.850 --rc geninfo_all_blocks=1 00:42:13.850 --rc geninfo_unexecuted_blocks=1 00:42:13.850 00:42:13.850 ' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:13.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.851 --rc genhtml_branch_coverage=1 00:42:13.851 --rc genhtml_function_coverage=1 00:42:13.851 --rc genhtml_legend=1 00:42:13.851 --rc geninfo_all_blocks=1 00:42:13.851 --rc 
geninfo_unexecuted_blocks=1 00:42:13.851 00:42:13.851 ' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:13.851 16:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
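build_nvmf_app_args, traced just above, is where this suite's --interrupt-mode flag becomes a target argument: the NVMF_APP array accumulates the shared-memory id, the 0xFFFF tracepoint mask, and --interrupt-mode before launch. In outline (a sketch; $SPDK_BIN_DIR is an illustrative stand-in for the build path):

# Sketch of the argument assembly in build_nvmf_app_args (nvmf/common.sh@25-34).
NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id (0 here) + full tracepoint group mask
NVMF_APP+=(--interrupt-mode)                  # appended because the '[' 1 -eq 1 ']' check above is true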
00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:21.991 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:21.992 16:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:21.992 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:21.992 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:21.992 Found net devices under 0000:31:00.0: cvl_0_0 00:42:21.992 16:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:21.992 Found net devices under 0000:31:00.1: cvl_0_1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:21.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:21.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:42:21.992 00:42:21.992 --- 10.0.0.2 ping statistics --- 00:42:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.992 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:21.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:21.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:42:21.992 00:42:21.992 --- 10.0.0.1 ping statistics --- 00:42:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.992 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:21.992 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=702351 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 702351 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 702351 ']' 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
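nvmfappstart launches the target inside the namespace prepared by nvmf_tcp_init: common.sh@293 above prepends 'ip netns exec cvl_0_0_ns_spdk' to NVMF_APP, so the nvmf_tgt started at @504 sees cvl_0_0/10.0.0.2 as its only NIC while the initiator side keeps cvl_0_1/10.0.0.1. The namespace plumbing traced above condenses to (a sketch):

# Sketch of nvmf_tcp_init (nvmf/common.sh@250-291 above): point-to-point test topology.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # sanity: initiator reaches target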
00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:21.993 16:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:21.993 [2024-09-27 16:01:01.655703] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:21.993 [2024-09-27 16:01:01.656664] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:21.993 [2024-09-27 16:01:01.656702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:21.993 [2024-09-27 16:01:01.743950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.993 [2024-09-27 16:01:01.774831] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:21.993 [2024-09-27 16:01:01.774866] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:21.993 [2024-09-27 16:01:01.774874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:21.993 [2024-09-27 16:01:01.774881] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:21.993 [2024-09-27 16:01:01.774886] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:21.993 [2024-09-27 16:01:01.774917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.993 [2024-09-27 16:01:01.822689] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:21.993 [2024-09-27 16:01:01.822958] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
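The 'Waiting for process...' echo comes from waitforlisten, which blocks until the freshly started target answers on its RPC socket; the interrupt-mode NOTICEs above are that target coming up with a single reactor on core 1 and its threads in intr mode. A simplified equivalent of the polling loop (a sketch, not the autotest_common.sh helper):

# Sketch: poll the RPC socket until the app is ready (cf. waitforlisten).
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                      # app died early
        rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                        # timed out
}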
00:42:21.993 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:21.993 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:42:21.993 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:21.993 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:21.993 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 [2024-09-27 16:01:02.507677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 Malloc0 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
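The queue_depth target setup repeats the standard pattern: a TCP transport, one 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.2:4420. As plain rpc.py calls (a sketch; rpc_cmd in the trace wraps scripts/rpc.py on the default socket):

# Sketch: the queue_depth.sh@23-27 target setup as explicit RPCs.
rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u 8192: in-capsule data size
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420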
00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 [2024-09-27 16:01:02.583782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=702673 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 702673 /var/tmp/bdevperf.sock 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 702673 ']' 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:22.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:22.254 16:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:22.254 [2024-09-27 16:01:02.640998] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:42:22.254 [2024-09-27 16:01:02.641073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702673 ] 00:42:22.254 [2024-09-27 16:01:02.726682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.514 [2024-09-27 16:01:02.771424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.085 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:23.085 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:42:23.085 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:23.085 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.085 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:23.345 NVMe0n1 00:42:23.345 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.345 16:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:23.345 Running I/O for 10 seconds... 00:42:33.640 9207.00 IOPS, 35.96 MiB/s 9183.50 IOPS, 35.87 MiB/s 9223.67 IOPS, 36.03 MiB/s 9659.25 IOPS, 37.73 MiB/s 10431.40 IOPS, 40.75 MiB/s 10932.00 IOPS, 42.70 MiB/s 11397.86 IOPS, 44.52 MiB/s 11667.00 IOPS, 45.57 MiB/s 11951.78 IOPS, 46.69 MiB/s 12182.20 IOPS, 47.59 MiB/s 00:42:33.640 Latency(us) 00:42:33.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.640 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:33.640 Verification LBA range: start 0x0 length 0x4000 00:42:33.640 NVMe0n1 : 10.06 12205.66 47.68 0.00 0.00 83601.11 24576.00 67283.63 00:42:33.640 =================================================================================================================== 00:42:33.640 Total : 12205.66 47.68 0.00 0.00 83601.11 24576.00 67283.63 00:42:33.640 { 00:42:33.640 "results": [ 00:42:33.640 { 00:42:33.640 "job": "NVMe0n1", 00:42:33.640 "core_mask": "0x1", 00:42:33.640 "workload": "verify", 00:42:33.640 "status": "finished", 00:42:33.640 "verify_range": { 00:42:33.640 "start": 0, 00:42:33.640 "length": 16384 00:42:33.640 }, 00:42:33.640 "queue_depth": 1024, 00:42:33.640 "io_size": 4096, 00:42:33.640 "runtime": 10.061313, 00:42:33.640 "iops": 12205.663415898103, 00:42:33.640 "mibps": 47.678372718351966, 00:42:33.640 "io_failed": 0, 00:42:33.640 "io_timeout": 0, 00:42:33.640 "avg_latency_us": 83601.11496003148, 00:42:33.640 "min_latency_us": 24576.0, 00:42:33.640 "max_latency_us": 67283.62666666666 00:42:33.640 } 00:42:33.640 ], 00:42:33.640 "core_count": 1 00:42:33.640 } 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 702673 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 702673 ']' 00:42:33.640 
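For reference, the queue-depth run traced above (12205.66 IOPS at depth 1024) reduces to the following RPC sequence. This is a consolidated sketch of the exact commands visible in the xtrace output: rpc.py drives the target on its default socket, then the bdevperf instance on its own socket.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

# Target side: TCP transport (-u 8192 = io_unit_size), a 64 MiB malloc bdev
# with 512-byte blocks, exported as a namespace of cnode1 on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf parks with -z until a controller is attached over
# its RPC socket; -q 1024 is the queue depth under test, -o 4096 the IO size.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests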
16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 702673 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 702673 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 702673' 00:42:33.640 killing process with pid 702673 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 702673 00:42:33.640 Received shutdown signal, test time was about 10.000000 seconds 00:42:33.640 00:42:33.640 Latency(us) 00:42:33.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.640 =================================================================================================================== 00:42:33.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 702673 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:33.640 16:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:33.640 rmmod nvme_tcp 00:42:33.640 rmmod nvme_fabrics 00:42:33.640 rmmod nvme_keyring 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 702351 ']' 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 702351 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 702351 ']' 00:42:33.640 16:01:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 702351 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 702351 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 702351' 00:42:33.640 killing process with pid 702351 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 702351 00:42:33.640 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 702351 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:33.900 16:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:36.440 00:42:36.440 real 0m22.382s 00:42:36.440 user 0m24.706s 00:42:36.440 sys 0m7.224s 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:36.440 ************************************ 00:42:36.440 END TEST nvmf_queue_depth 00:42:36.440 ************************************ 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:36.440 ************************************ 00:42:36.440 START TEST nvmf_target_multipath 00:42:36.440 ************************************ 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:36.440 * Looking for test storage... 00:42:36.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:36.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.440 --rc genhtml_branch_coverage=1 00:42:36.440 --rc genhtml_function_coverage=1 00:42:36.440 --rc genhtml_legend=1 00:42:36.440 --rc geninfo_all_blocks=1 00:42:36.440 --rc geninfo_unexecuted_blocks=1 00:42:36.440 00:42:36.440 ' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:36.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.440 --rc genhtml_branch_coverage=1 00:42:36.440 --rc genhtml_function_coverage=1 00:42:36.440 --rc genhtml_legend=1 00:42:36.440 --rc geninfo_all_blocks=1 00:42:36.440 --rc geninfo_unexecuted_blocks=1 00:42:36.440 00:42:36.440 ' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:36.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.440 --rc genhtml_branch_coverage=1 00:42:36.440 --rc genhtml_function_coverage=1 00:42:36.440 --rc genhtml_legend=1 00:42:36.440 --rc geninfo_all_blocks=1 00:42:36.440 --rc geninfo_unexecuted_blocks=1 00:42:36.440 00:42:36.440 ' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:36.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.440 --rc genhtml_branch_coverage=1 00:42:36.440 --rc genhtml_function_coverage=1 00:42:36.440 --rc 
genhtml_legend=1 00:42:36.440 --rc geninfo_all_blocks=1 00:42:36.440 --rc geninfo_unexecuted_blocks=1 00:42:36.440 00:42:36.440 ' 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.440 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.441 16:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:36.441 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:44.575 16:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:44.575 16:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:44.575 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:44.575 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:44.575 16:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:44.575 Found net devices under 0000:31:00.0: cvl_0_0 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:44.575 Found net devices under 0000:31:00.1: cvl_0_1 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:44.575 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:44.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:44.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:42:44.576 00:42:44.576 --- 10.0.0.2 ping statistics --- 00:42:44.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.576 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:44.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:44.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:42:44.576 00:42:44.576 --- 10.0.0.1 ping statistics --- 00:42:44.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.576 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:44.576 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:44.576 only one NIC for nvmf test 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:44.576 rmmod nvme_tcp 00:42:44.576 rmmod nvme_fabrics 00:42:44.576 rmmod nvme_keyring 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p 
]] 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.576 16:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:45.959 16:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:45.959 00:42:45.959 real 0m9.846s 00:42:45.959 user 0m2.178s 00:42:45.959 sys 0m5.584s 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:45.959 ************************************ 00:42:45.959 END TEST nvmf_target_multipath 00:42:45.959 ************************************ 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:45.959 ************************************ 00:42:45.959 START TEST nvmf_zcopy 00:42:45.959 ************************************ 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:45.959 * Looking for test storage... 
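The multipath test above exits 0 early ("only one NIC for nvmf test"): nvmftestinit found a single dual-port E810 card (cvl_0_0/cvl_0_1), so NVMF_SECOND_TARGET_IP stays empty and there is no second path to exercise. Before that check, nvmf_tcp_init had already wired the two ports into a loopback topology with a network namespace, as traced earlier; a condensed sketch of that wiring:

# Move one port into its own namespace so 10.0.0.1 <-> 10.0.0.2 traffic
# actually traverses the link between the two ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator side and verify reachability; the
# script tags the rule with an SPDK_NVMF comment so the iptr cleanup
# (iptables-save | grep -v SPDK_NVMF | iptables-restore) can drop it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1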
00:42:45.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:45.959 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:45.960 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:42:45.960 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:46.220 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:46.220 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:46.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.221 --rc genhtml_branch_coverage=1 00:42:46.221 --rc genhtml_function_coverage=1 00:42:46.221 --rc genhtml_legend=1 00:42:46.221 --rc geninfo_all_blocks=1 00:42:46.221 --rc geninfo_unexecuted_blocks=1 00:42:46.221 00:42:46.221 ' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:46.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.221 --rc genhtml_branch_coverage=1 00:42:46.221 --rc genhtml_function_coverage=1 00:42:46.221 --rc genhtml_legend=1 00:42:46.221 --rc geninfo_all_blocks=1 00:42:46.221 --rc geninfo_unexecuted_blocks=1 00:42:46.221 00:42:46.221 ' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:46.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.221 --rc genhtml_branch_coverage=1 00:42:46.221 --rc genhtml_function_coverage=1 00:42:46.221 --rc genhtml_legend=1 00:42:46.221 --rc geninfo_all_blocks=1 00:42:46.221 --rc geninfo_unexecuted_blocks=1 00:42:46.221 00:42:46.221 ' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:46.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.221 --rc genhtml_branch_coverage=1 00:42:46.221 --rc genhtml_function_coverage=1 00:42:46.221 --rc genhtml_legend=1 00:42:46.221 --rc geninfo_all_blocks=1 00:42:46.221 --rc geninfo_unexecuted_blocks=1 00:42:46.221 00:42:46.221 ' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.221 16:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:46.221 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:46.222 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:54.357 16:01:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:54.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:54.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:54.357 Found net devices under 0000:31:00.0: cvl_0_0 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:54.357 Found net devices under 0000:31:00.1: cvl_0_1 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:54.357 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:54.358 16:01:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:54.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:54.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:42:54.358 00:42:54.358 --- 10.0.0.2 ping statistics --- 00:42:54.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.358 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:54.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:54.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:42:54.358 00:42:54.358 --- 10.0.0.1 ping statistics --- 00:42:54.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.358 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=713142 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 713142 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 
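The nvmf_tcp_init trace above builds the test topology by moving one of the two ice ports into a private network namespace, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, on the host side) talk over a real NIC-to-NIC TCP path. A minimal sketch of that same sequence, using the interface names this host discovered (cvl_0_0/cvl_0_1 would differ on another machine):

    TGT_NS=cvl_0_0_ns_spdk
    ip netns add "$TGT_NS"                        # target gets its own namespace
    ip link set cvl_0_0 netns "$TGT_NS"           # first port becomes the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays on the host
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address in the namespace
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    # open the NVMe/TCP port toward the initiator NIC, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping statistics above confirm the two ports are wired back-to-back before any NVMe traffic is attempted.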
00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 713142 ']' 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:54.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:54.358 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.358 [2024-09-27 16:01:34.041966] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:54.358 [2024-09-27 16:01:34.042958] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:54.358 [2024-09-27 16:01:34.042996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:54.358 [2024-09-27 16:01:34.126105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.358 [2024-09-27 16:01:34.159730] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:54.358 [2024-09-27 16:01:34.159770] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:54.358 [2024-09-27 16:01:34.159782] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:54.358 [2024-09-27 16:01:34.159788] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:54.358 [2024-09-27 16:01:34.159794] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:54.358 [2024-09-27 16:01:34.159814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:54.358 [2024-09-27 16:01:34.212721] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:54.358 [2024-09-27 16:01:34.213001] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
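With nvmf_tgt up inside the namespace (launched with -m 0x2 and --interrupt-mode, hence the intr-mode notices above), the script configures it through rpc_cmd, which forwards its arguments to scripts/rpc.py over /var/tmp/spdk.sock. A condensed sketch of the call sequence the trace below records, under that assumption:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport with in-capsule data disabled (-c 0) and zero-copy (--zcopy) enabled
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem capped at 10 namespaces, plus a listener on the target-side address
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # a 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1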
00:42:54.358 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:54.358 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:42:54.358 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:54.358 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:54.358 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 [2024-09-27 16:01:34.876565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 [2024-09-27 16:01:34.904792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:54.619 16:01:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 malloc0 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:42:54.619 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:54.620 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:54.620 { 00:42:54.620 "params": { 00:42:54.620 "name": "Nvme$subsystem", 00:42:54.620 "trtype": "$TEST_TRANSPORT", 00:42:54.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:54.620 "adrfam": "ipv4", 00:42:54.620 "trsvcid": "$NVMF_PORT", 00:42:54.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:54.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:54.620 "hdgst": ${hdgst:-false}, 00:42:54.620 "ddgst": ${ddgst:-false} 00:42:54.620 }, 00:42:54.620 "method": "bdev_nvme_attach_controller" 00:42:54.620 } 00:42:54.620 EOF 00:42:54.620 )") 00:42:54.620 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:42:54.620 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:42:54.620 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:42:54.620 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:54.620 "params": { 00:42:54.620 "name": "Nvme1", 00:42:54.620 "trtype": "tcp", 00:42:54.620 "traddr": "10.0.0.2", 00:42:54.620 "adrfam": "ipv4", 00:42:54.620 "trsvcid": "4420", 00:42:54.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:54.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:54.620 "hdgst": false, 00:42:54.620 "ddgst": false 00:42:54.620 }, 00:42:54.620 "method": "bdev_nvme_attach_controller" 00:42:54.620 }' 00:42:54.620 [2024-09-27 16:01:35.015983] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
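The /dev/fd/62 argument above comes from bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown in the trace, and bdevperf reads it as though it were an on-disk config file, so nothing is written to disk. Saved to a file instead, and assuming the stanza is wrapped in the usual bdev-subsystem document, an equivalent standalone run would look like the following (bdevperf.json is a hypothetical filename):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    ./build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192

As a sanity check on the 10-second run below: at -o 8192 each I/O is 8 KiB, so the reported 8135.70 IOPS works out to 8135.70/128 ≈ 63.56 MiB/s, exactly the MiB/s column in the summary.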
00:42:54.620 [2024-09-27 16:01:35.016040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713386 ] 00:42:54.620 [2024-09-27 16:01:35.094908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.879 [2024-09-27 16:01:35.126908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.879 Running I/O for 10 seconds... 00:43:05.173 6497.00 IOPS, 50.76 MiB/s 6463.50 IOPS, 50.50 MiB/s 6514.33 IOPS, 50.89 MiB/s 6518.75 IOPS, 50.93 MiB/s 6730.20 IOPS, 52.58 MiB/s 7198.33 IOPS, 56.24 MiB/s 7530.00 IOPS, 58.83 MiB/s 7781.75 IOPS, 60.79 MiB/s 7973.22 IOPS, 62.29 MiB/s 8131.70 IOPS, 63.53 MiB/s 00:43:05.173 Latency(us) 00:43:05.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:05.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:43:05.173 Verification LBA range: start 0x0 length 0x1000 00:43:05.173 Nvme1n1 : 10.01 8135.70 63.56 0.00 0.00 15687.43 1925.12 27088.21 00:43:05.173 =================================================================================================================== 00:43:05.173 Total : 8135.70 63.56 0.00 0.00 15687.43 1925.12 27088.21 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=715204 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:05.173 { 00:43:05.173 "params": { 00:43:05.173 "name": "Nvme$subsystem", 00:43:05.173 "trtype": "$TEST_TRANSPORT", 00:43:05.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.173 "adrfam": "ipv4", 00:43:05.173 "trsvcid": "$NVMF_PORT", 00:43:05.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.173 "hdgst": ${hdgst:-false}, 00:43:05.173 "ddgst": ${ddgst:-false} 00:43:05.173 }, 00:43:05.173 "method": "bdev_nvme_attach_controller" 00:43:05.173 } 00:43:05.173 EOF 00:43:05.173 )") 00:43:05.173 [2024-09-27 16:01:45.452157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.452184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # 
jq . 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:43:05.173 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:05.173 "params": { 00:43:05.173 "name": "Nvme1", 00:43:05.173 "trtype": "tcp", 00:43:05.173 "traddr": "10.0.0.2", 00:43:05.173 "adrfam": "ipv4", 00:43:05.173 "trsvcid": "4420", 00:43:05.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:05.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:05.173 "hdgst": false, 00:43:05.173 "ddgst": false 00:43:05.173 }, 00:43:05.173 "method": "bdev_nvme_attach_controller" 00:43:05.173 }' 00:43:05.173 [2024-09-27 16:01:45.464128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.464137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.476126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.476134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.488125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.488133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.500124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.500132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.503835] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:43:05.173 [2024-09-27 16:01:45.503891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715204 ] 00:43:05.173 [2024-09-27 16:01:45.512124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.512132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.524125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.524132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.536125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.536133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.548124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.548132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.560124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.560132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.173 [2024-09-27 16:01:45.572125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.173 [2024-09-27 16:01:45.572133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.583107] 
app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.174 [2024-09-27 16:01:45.584125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.584132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.596126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.596137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.608127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.608142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.611301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.174 [2024-09-27 16:01:45.620126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.620134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.632129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.632145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.644127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.644138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.174 [2024-09-27 16:01:45.656128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.174 [2024-09-27 16:01:45.656137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.668125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.668134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.680133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.680149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.692158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.692167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.704128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.704138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.716126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.716136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.728133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.728148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 Running I/O for 5 seconds... 
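The wall of repeating error pairs around this point is expected, not a failure: while the 5-second randrw job runs, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and every attempt pauses the subsystem, fails with "Requested NSID 1 already in use" (the namespace was added during setup), and resumes it. The point is to exercise the subsystem pause/resume path while zero-copy I/O is outstanding. A sketch of the presumed driving loop (the exact body lives in target/zcopy.sh):

    # hammer the add_ns-while-paused path until bdevperf ($perfpid) exits
    while kill -0 "$perfpid" 2>/dev/null; do
        # always fails with NSID-in-use, which is the intent here
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done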
00:43:05.434 [2024-09-27 16:01:45.740153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.740169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.755673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.755689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.768778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.768795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.782931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.782948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.795764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.795780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.808545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.808560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.823711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.823726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.836407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.836422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.851412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.851427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.864353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.864368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.877027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.877041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.891346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.891361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.904071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.904087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.434 [2024-09-27 16:01:45.916597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.434 [2024-09-27 16:01:45.916611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.931207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 
[2024-09-27 16:01:45.931222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.944036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 [2024-09-27 16:01:45.944051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.956319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 [2024-09-27 16:01:45.956334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.968829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 [2024-09-27 16:01:45.968844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.983503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 [2024-09-27 16:01:45.983518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.694 [2024-09-27 16:01:45.996283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.694 [2024-09-27 16:01:45.996302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.010670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.010685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.023957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.023971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.036769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.036783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.051376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.051391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.063998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.064014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.076134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.076149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.088382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.088396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.103322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.103337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.116468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:05.695 [2024-09-27 16:01:46.116482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:05.695 [2024-09-27 16:01:46.131306] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:05.695 [2024-09-27 16:01:46.131321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:05.695 [2024-09-27 16:01:46.144569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:05.695 [2024-09-27 16:01:46.144583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2128 / nvmf_rpc.c:1517 *ERROR* pair repeats roughly every 13 ms from 16:01:46.158865 through 16:01:50.011594 while the test loops on the duplicate-NSID add-namespace RPC; the only distinct entries in that window are the periodic throughput samples below ...]
00:43:06.477 18883.00 IOPS, 147.52 MiB/s
00:43:07.522 18895.00 IOPS, 147.62 MiB/s
00:43:08.306 18906.33 IOPS, 147.71 MiB/s
00:43:09.350 18899.00 IOPS, 147.65 MiB/s
00:43:09.611 [2024-09-27 16:01:50.024262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:09.611 [2024-09-27 16:01:50.024277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.611 [2024-09-27 16:01:50.039404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.611 [2024-09-27 16:01:50.052173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.611 [2024-09-27 16:01:50.052189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.611 [2024-09-27 16:01:50.064875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.611 [2024-09-27 16:01:50.064889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.611 [2024-09-27 16:01:50.076150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.612 [2024-09-27 16:01:50.076166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.612 [2024-09-27 16:01:50.088654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.612 [2024-09-27 16:01:50.088669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.103136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.103152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.116391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.116406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.131312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.131328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.143821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.143836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.156249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.156264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.168978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.168993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.183344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.183358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.196103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.196118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.209303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.209317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.223374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.223389] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.236395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.236410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.251254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.251269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.263969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.263984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.276534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.276549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.291026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.291040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.303778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.303793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.316368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.316383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.328487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.328502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.343656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.343672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:09.872 [2024-09-27 16:01:50.356160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:09.872 [2024-09-27 16:01:50.356174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.368824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.368839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.383194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.383209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.396365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.396380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.408667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.408681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.423373] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.423389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.435886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.435905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.133 [2024-09-27 16:01:50.448452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.133 [2024-09-27 16:01:50.448466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.462874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.462889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.476105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.476120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.488914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.488929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.503348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.503362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.516108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.516123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.527812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.527828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.540783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.540798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.555736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.555754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.568253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.568267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.583796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.583812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.596529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.596543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.134 [2024-09-27 16:01:50.611147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.134 [2024-09-27 16:01:50.611162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.623823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.623838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.636174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.636189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.648751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.648766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.663365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.663380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.676290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.676306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.691091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.691107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.704301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.704316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.716540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.716555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.731331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.731345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.743968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.743983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 18899.20 IOPS, 147.65 MiB/s [2024-09-27 16:01:50.756039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.756056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 00:43:10.395 Latency(us) 00:43:10.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:10.395 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:43:10.395 Nvme1n1 : 5.01 18897.49 147.64 0.00 0.00 6766.67 2662.40 11304.96 00:43:10.395 =================================================================================================================== 00:43:10.395 Total : 18897.49 147.64 0.00 0.00 6766.67 2662.40 11304.96 00:43:10.395 [2024-09-27 16:01:50.764129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.764147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 
16:01:50.776130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.776144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.788130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.788143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.800131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.800144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.812129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.812140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.824126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.824136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.836127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.836138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.848128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.848140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.860127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.860137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 [2024-09-27 16:01:50.872124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:10.395 [2024-09-27 16:01:50.872133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:10.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (715204) - No such process 00:43:10.395 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 715204 00:43:10.395 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:10.395 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.395 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:10.656 delay0 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.656 16:01:50 
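The RPC pair just traced (nvmf_subsystem_remove_ns, then bdev_delay_create), together with the nvmf_subsystem_add_ns call that follows on the next lines, swaps the subsystem's namespace for a delay bdev layered on malloc0. A minimal standalone sketch of the same swap with scripts/rpc.py, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock; arguments mirror the rpc_cmd calls in this run:

    # Detach NSID 1, wrap malloc0 in a delay bdev, reattach it as NSID 1.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # -r/-t: average/p99 read latency, -w/-n: average/p99 write latency (microseconds)
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1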
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.656 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:43:10.656 [2024-09-27 16:01:51.020563] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:43:17.236 [2024-09-27 16:01:57.382706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587260 is same with the state(6) to be set 00:43:17.236 [2024-09-27 16:01:57.382741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587260 is same with the state(6) to be set 00:43:17.236 Initializing NVMe Controllers 00:43:17.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:17.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:43:17.236 Initialization complete. Launching workers. 00:43:17.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 296, failed: 10458 00:43:17.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10676, failed to submit 78 00:43:17.236 success 10550, unsuccessful 126, failed 0 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:17.236 rmmod nvme_tcp 00:43:17.236 rmmod nvme_fabrics 00:43:17.236 rmmod nvme_keyring 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 713142 ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 713142 00:43:17.236 16:01:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 713142 ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 713142 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 713142 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 713142' 00:43:17.236 killing process with pid 713142 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 713142 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 713142 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:17.236 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:19.777 00:43:19.777 real 0m33.409s 00:43:19.777 user 0m42.374s 00:43:19.777 sys 0m12.055s 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:19.777 ************************************ 00:43:19.777 END TEST nvmf_zcopy 00:43:19.777 ************************************ 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # 
run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:19.777 ************************************ 00:43:19.777 START TEST nvmf_nmic 00:43:19.777 ************************************ 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:43:19.777 * Looking for test storage... 00:43:19.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:19.777 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:43:19.778 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:43:19.778 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:19.778 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:43:19.778 16:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:19.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:19.778 --rc genhtml_branch_coverage=1 00:43:19.778 --rc genhtml_function_coverage=1 00:43:19.778 --rc genhtml_legend=1 00:43:19.778 --rc geninfo_all_blocks=1 00:43:19.778 --rc geninfo_unexecuted_blocks=1 00:43:19.778 00:43:19.778 ' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:19.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:19.778 --rc genhtml_branch_coverage=1 00:43:19.778 --rc genhtml_function_coverage=1 00:43:19.778 --rc genhtml_legend=1 00:43:19.778 --rc geninfo_all_blocks=1 00:43:19.778 --rc geninfo_unexecuted_blocks=1 00:43:19.778 00:43:19.778 ' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:19.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:19.778 --rc genhtml_branch_coverage=1 00:43:19.778 --rc genhtml_function_coverage=1 00:43:19.778 --rc genhtml_legend=1 00:43:19.778 --rc geninfo_all_blocks=1 00:43:19.778 --rc geninfo_unexecuted_blocks=1 00:43:19.778 00:43:19.778 ' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:19.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:19.778 --rc genhtml_branch_coverage=1 00:43:19.778 --rc genhtml_function_coverage=1 00:43:19.778 --rc genhtml_legend=1 00:43:19.778 --rc geninfo_all_blocks=1 00:43:19.778 --rc geninfo_unexecuted_blocks=1 00:43:19.778 00:43:19.778 ' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:19.778 16:02:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:19.778 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:19.779 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:19.779 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:19.779 16:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:27.908 16:02:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:27.908 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:27.909 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:27.909 16:02:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:27.909 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:27.909 Found net devices under 0000:31:00.0: cvl_0_0 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:27.909 16:02:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:27.909 Found net devices under 0000:31:00.1: cvl_0_1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
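The ip commands in this stretch of the log assemble the usual two-port NVMe/TCP test bed: the target-side port is isolated in its own network namespace while the initiator-side port stays in the host namespace, so traffic crosses the physical link between the two ports. A condensed replay of just the topology steps (interface names cvl_0_0/cvl_0_1 are specific to this machine; the remaining link bring-up and the ping checks continue in the log below):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side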
00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:27.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:27.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:43:27.909 00:43:27.909 --- 10.0.0.2 ping statistics --- 00:43:27.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.909 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:27.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:27.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:43:27.909 00:43:27.909 --- 10.0.0.1 ping statistics --- 00:43:27.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.909 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=721674 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 721674 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 721674 ']' 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:27.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:27.909 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.909 [2024-09-27 16:02:07.531048] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:27.909 [2024-09-27 16:02:07.532046] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:43:27.909 [2024-09-27 16:02:07.532082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:27.909 [2024-09-27 16:02:07.616621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:27.909 [2024-09-27 16:02:07.650330] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:27.909 [2024-09-27 16:02:07.650369] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:27.909 [2024-09-27 16:02:07.650378] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:27.909 [2024-09-27 16:02:07.650385] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:27.909 [2024-09-27 16:02:07.650391] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:27.909 [2024-09-27 16:02:07.650534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:27.909 [2024-09-27 16:02:07.650688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:27.909 [2024-09-27 16:02:07.650838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.909 [2024-09-27 16:02:07.650839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:27.909 [2024-09-27 16:02:07.716740] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:27.909 [2024-09-27 16:02:07.717409] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:27.909 [2024-09-27 16:02:07.718065] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:27.909 [2024-09-27 16:02:07.718710] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
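nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and blocks until the app answers on its RPC socket. A rough sketch of what those helpers boil down to; the polling loop is an assumption about waitforlisten's behavior, not its literal body:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &              # 4 cores (0xF), interrupt mode
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target is ready for rpc_cmd calls;
  # the socket lives on the filesystem, so it is reachable from the root namespace
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

With --interrupt-mode the reactors sleep on fd events instead of busy-polling, which is why every spdk_thread in the NOTICE lines above is switched to intr mode as it is created.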
00:43:27.909 [2024-09-27 16:02:07.718749] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:27.909 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:27.909 [2024-09-27 16:02:08.375891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 Malloc0 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 [2024-09-27 16:02:08.452165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:28.169 test case1: single bdev can't be used in multiple subsystems 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 [2024-09-27 16:02:08.487505] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:28.169 [2024-09-27 16:02:08.487526] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:28.169 [2024-09-27 16:02:08.487534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:28.169 request: 00:43:28.169 { 00:43:28.169 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:28.169 "namespace": { 00:43:28.169 "bdev_name": "Malloc0", 00:43:28.169 "no_auto_visible": false 00:43:28.169 }, 00:43:28.169 "method": "nvmf_subsystem_add_ns", 00:43:28.169 "req_id": 1 00:43:28.169 } 00:43:28.169 Got JSON-RPC error response 00:43:28.169 response: 00:43:28.169 { 00:43:28.169 "code": -32602, 00:43:28.169 "message": "Invalid parameters" 00:43:28.169 } 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:28.169 Adding namespace failed - expected result. 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:28.169 test case2: host connect to nvmf target in multiple paths 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:28.169 [2024-09-27 16:02:08.499618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:28.169 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:28.739 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:28.998 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:28.998 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:43:28.998 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:28.998 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:43:28.998 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:43:30.906 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:30.906 [global] 00:43:30.906 thread=1 00:43:30.906 invalidate=1 00:43:30.906 rw=write 00:43:30.906 time_based=1 00:43:30.906 runtime=1 00:43:30.906 ioengine=libaio 00:43:30.906 direct=1 00:43:30.906 bs=4096 00:43:30.906 iodepth=1 
00:43:30.906 norandommap=0 00:43:30.906 numjobs=1 00:43:30.907 00:43:30.907 verify_dump=1 00:43:30.907 verify_backlog=512 00:43:30.907 verify_state_save=0 00:43:30.907 do_verify=1 00:43:30.907 verify=crc32c-intel 00:43:30.907 [job0] 00:43:30.907 filename=/dev/nvme0n1 00:43:31.183 Could not set queue depth (nvme0n1) 00:43:31.447 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:31.447 fio-3.35 00:43:31.447 Starting 1 thread 00:43:32.828 00:43:32.828 job0: (groupid=0, jobs=1): err= 0: pid=722784: Fri Sep 27 16:02:12 2024 00:43:32.828 read: IOPS=645, BW=2581KiB/s (2643kB/s)(2584KiB/1001msec) 00:43:32.828 slat (nsec): min=6329, max=58898, avg=22996.22, stdev=8096.80 00:43:32.828 clat (usec): min=304, max=897, avg=688.29, stdev=90.19 00:43:32.828 lat (usec): min=331, max=923, avg=711.29, stdev=93.96 00:43:32.828 clat percentiles (usec): 00:43:32.828 | 1.00th=[ 433], 5.00th=[ 529], 10.00th=[ 562], 20.00th=[ 619], 00:43:32.828 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 734], 00:43:32.828 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 791], 95.00th=[ 807], 00:43:32.828 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 898], 00:43:32.828 | 99.99th=[ 898] 00:43:32.828 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:32.828 slat (nsec): min=9111, max=65119, avg=29917.73, stdev=8947.74 00:43:32.828 clat (usec): min=160, max=1567, avg=486.80, stdev=109.61 00:43:32.828 lat (usec): min=187, max=1605, avg=516.72, stdev=113.22 00:43:32.828 clat percentiles (usec): 00:43:32.828 | 1.00th=[ 241], 5.00th=[ 302], 10.00th=[ 355], 20.00th=[ 392], 00:43:32.828 | 30.00th=[ 441], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 515], 00:43:32.828 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 652], 00:43:32.828 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 742], 99.95th=[ 1565], 00:43:32.828 | 99.99th=[ 1565] 00:43:32.828 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:43:32.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:32.828 lat (usec) : 250=1.08%, 500=35.69%, 750=50.90%, 1000=12.28% 00:43:32.828 lat (msec) : 2=0.06% 00:43:32.828 cpu : usr=4.00%, sys=5.30%, ctx=1670, majf=0, minf=1 00:43:32.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.828 issued rwts: total=646,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:32.828 00:43:32.828 Run status group 0 (all jobs): 00:43:32.828 READ: bw=2581KiB/s (2643kB/s), 2581KiB/s-2581KiB/s (2643kB/s-2643kB/s), io=2584KiB (2646kB), run=1001-1001msec 00:43:32.828 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:43:32.828 00:43:32.828 Disk stats (read/write): 00:43:32.828 nvme0n1: ios=562/1008, merge=0/0, ticks=385/413, in_queue=798, util=93.69% 00:43:32.828 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:32.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:32.828 16:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:32.828 rmmod nvme_tcp 00:43:32.828 rmmod nvme_fabrics 00:43:32.828 rmmod nvme_keyring 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 721674 ']' 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 721674 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 721674 ']' 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 721674 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 721674 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 721674' 00:43:32.828 killing process with pid 721674 00:43:32.828 16:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 721674 00:43:32.828 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 721674 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:33.088 16:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.995 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:34.995 00:43:34.995 real 0m15.645s 00:43:34.995 user 0m32.150s 00:43:34.995 sys 0m7.457s 00:43:34.995 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:34.995 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:34.995 ************************************ 00:43:34.995 END TEST nvmf_nmic 00:43:34.995 ************************************ 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:35.255 ************************************ 00:43:35.255 START TEST nvmf_fio_target 00:43:35.255 ************************************ 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:35.255 * Looking for test storage... 
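Before the next test begins, nvmftestfini (traced above) unwinds everything nvmftestinit created. Condensed, with the namespace-removal step an assumption about what _remove_spdk_ns does:

  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # stop the target app
  modprobe -v -r nvme-tcp                               # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK's tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # clear the initiator-side address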
00:43:35.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:35.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.255 --rc genhtml_branch_coverage=1 00:43:35.255 --rc genhtml_function_coverage=1 00:43:35.255 --rc genhtml_legend=1 00:43:35.255 --rc geninfo_all_blocks=1 00:43:35.255 --rc geninfo_unexecuted_blocks=1 00:43:35.255 00:43:35.255 ' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:35.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.255 --rc genhtml_branch_coverage=1 00:43:35.255 --rc genhtml_function_coverage=1 00:43:35.255 --rc genhtml_legend=1 00:43:35.255 --rc geninfo_all_blocks=1 00:43:35.255 --rc geninfo_unexecuted_blocks=1 00:43:35.255 00:43:35.255 ' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:35.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.255 --rc genhtml_branch_coverage=1 00:43:35.255 --rc genhtml_function_coverage=1 00:43:35.255 --rc genhtml_legend=1 00:43:35.255 --rc geninfo_all_blocks=1 00:43:35.255 --rc geninfo_unexecuted_blocks=1 00:43:35.255 00:43:35.255 ' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:35.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.255 --rc genhtml_branch_coverage=1 00:43:35.255 --rc genhtml_function_coverage=1 00:43:35.255 --rc genhtml_legend=1 00:43:35.255 --rc geninfo_all_blocks=1 00:43:35.255 --rc geninfo_unexecuted_blocks=1 00:43:35.255 
00:43:35.255 ' 00:43:35.255 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:35.256 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:35.515 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:35.516 16:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:43.639 16:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:43.639 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:43.639 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:43.639 Found net devices under 0000:31:00.0: cvl_0_0 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.639 16:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:43.639 Found net devices under 0000:31:00.1: cvl_0_1 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:43.639 16:02:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:43.639 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:43.640 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:43.640 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:43.640 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:43.640 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:43.640 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:43.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:43.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:43:43.640 00:43:43.640 --- 10.0.0.2 ping statistics --- 00:43:43.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:43.640 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:43.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:43.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:43:43.640 00:43:43.640 --- 10.0.0.1 ping statistics --- 00:43:43.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:43.640 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=727200 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 727200 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 727200 ']' 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
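The ipts helper seen at nvmf/common.sh@287/@786 above is a thin iptables wrapper that tags each rule it inserts, which is what lets teardown strip exactly those rules and nothing else. Reconstructed from the expansion visible in this trace:

  ipts() {
      # append a SPDK_NVMF-tagged comment so the rule can be found again later
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listen port
  # teardown side: keep every rule except the tagged ones
  iptables-save | grep -v SPDK_NVMF | iptables-restore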
00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:43.640 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.640 [2024-09-27 16:02:23.241089] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:43.640 [2024-09-27 16:02:23.242438] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:43:43.640 [2024-09-27 16:02:23.242488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:43.640 [2024-09-27 16:02:23.327927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:43.640 [2024-09-27 16:02:23.360414] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:43.640 [2024-09-27 16:02:23.360450] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:43.640 [2024-09-27 16:02:23.360458] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:43.640 [2024-09-27 16:02:23.360465] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:43.640 [2024-09-27 16:02:23.360474] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:43.640 [2024-09-27 16:02:23.360615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:43.640 [2024-09-27 16:02:23.360765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:43.640 [2024-09-27 16:02:23.360936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.640 [2024-09-27 16:02:23.360938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:43.640 [2024-09-27 16:02:23.420325] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:43.640 [2024-09-27 16:02:23.421677] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:43.640 [2024-09-27 16:02:23.421831] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:43.640 [2024-09-27 16:02:23.422654] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:43.640 [2024-09-27 16:02:23.422707] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
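With connectivity verified, the target is started inside the namespace in interrupt mode (hence the reactor and spdk_thread intr-mode notices above), and the storage stack is then assembled over JSON-RPC. The trace that follows amounts to this sequence (a condensed sketch: rpc.py stands for scripts/rpc.py in the SPDK tree, paths are abbreviated, and backgrounding of nvmf_tgt is implied by nvmfappstart):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512     # issued seven times, producing Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # the run also passes --hostnqn/--hostid

Four namespaces end up on cnode1, which is why the waitforserial check further below expects lsblk to report four block devices (nvme0n1..nvme0n4) carrying the serial SPDKISFASTANDAWESOME before the fio workloads begin.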
00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:43.640 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:43.899 [2024-09-27 16:02:24.205762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:43.899 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.158 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:44.158 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.158 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:44.158 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.418 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:44.418 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.677 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:44.677 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:44.677 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:44.938 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:44.938 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:45.198 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:45.198 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:45.198 16:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:45.198 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:45.458 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:45.716 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:45.716 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:45.716 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:45.716 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:45.976 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:46.235 [2024-09-27 16:02:26.521574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:46.235 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:46.494 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:46.494 16:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:43:47.063 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:43:48.973 16:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:48.973 [global] 00:43:48.973 thread=1 00:43:48.973 invalidate=1 00:43:48.973 rw=write 00:43:48.973 time_based=1 00:43:48.973 runtime=1 00:43:48.973 ioengine=libaio 00:43:48.973 direct=1 00:43:48.973 bs=4096 00:43:48.973 iodepth=1 00:43:48.973 norandommap=0 00:43:48.973 numjobs=1 00:43:48.973 00:43:48.973 verify_dump=1 00:43:48.973 verify_backlog=512 00:43:48.973 verify_state_save=0 00:43:48.973 do_verify=1 00:43:48.973 verify=crc32c-intel 00:43:48.973 [job0] 00:43:48.973 filename=/dev/nvme0n1 00:43:48.973 [job1] 00:43:48.973 filename=/dev/nvme0n2 00:43:48.973 [job2] 00:43:48.973 filename=/dev/nvme0n3 00:43:48.973 [job3] 00:43:48.973 filename=/dev/nvme0n4 00:43:48.973 Could not set queue depth (nvme0n1) 00:43:48.973 Could not set queue depth (nvme0n2) 00:43:48.973 Could not set queue depth (nvme0n3) 00:43:48.973 Could not set queue depth (nvme0n4) 00:43:49.232 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.232 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.232 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.232 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.232 fio-3.35 00:43:49.232 Starting 4 threads 00:43:50.614 00:43:50.614 job0: (groupid=0, jobs=1): err= 0: pid=728600: Fri Sep 27 16:02:30 2024 00:43:50.614 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:50.614 slat (nsec): min=25455, max=59244, avg=26387.72, stdev=3074.93 00:43:50.614 clat (usec): min=706, max=1350, avg=1085.92, stdev=85.38 00:43:50.614 lat (usec): min=731, max=1376, avg=1112.30, stdev=85.26 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 807], 5.00th=[ 930], 10.00th=[ 988], 20.00th=[ 1037], 00:43:50.614 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:43:50.614 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:43:50.614 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1352], 00:43:50.614 | 99.99th=[ 1352] 00:43:50.614 write: IOPS=622, BW=2490KiB/s (2549kB/s)(2492KiB/1001msec); 0 zone resets 00:43:50.614 slat (nsec): min=9164, max=55974, avg=29556.13, stdev=10002.13 00:43:50.614 clat (usec): min=242, max=1333, avg=647.62, stdev=139.85 00:43:50.614 lat (usec): min=274, max=1369, avg=677.17, stdev=144.51 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 515], 00:43:50.614 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 668], 60.00th=[ 693], 00:43:50.614 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 857], 00:43:50.614 | 
99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1336], 99.95th=[ 1336], 00:43:50.614 | 99.99th=[ 1336] 00:43:50.614 bw ( KiB/s): min= 4096, max= 4096, per=45.68%, avg=4096.00, stdev= 0.00, samples=1 00:43:50.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:50.614 lat (usec) : 250=0.09%, 500=8.99%, 750=32.69%, 1000=18.41% 00:43:50.614 lat (msec) : 2=39.82% 00:43:50.614 cpu : usr=1.60%, sys=5.10%, ctx=1135, majf=0, minf=1 00:43:50.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:50.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 issued rwts: total=512,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:50.614 job1: (groupid=0, jobs=1): err= 0: pid=728613: Fri Sep 27 16:02:30 2024 00:43:50.614 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:50.614 slat (nsec): min=25835, max=58176, avg=27205.53, stdev=3361.60 00:43:50.614 clat (usec): min=808, max=1449, avg=1164.75, stdev=81.68 00:43:50.614 lat (usec): min=836, max=1476, avg=1191.96, stdev=81.68 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 938], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:43:50.614 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:43:50.614 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1287], 00:43:50.614 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1450], 99.95th=[ 1450], 00:43:50.614 | 99.99th=[ 1450] 00:43:50.614 write: IOPS=625, BW=2501KiB/s (2562kB/s)(2504KiB/1001msec); 0 zone resets 00:43:50.614 slat (nsec): min=9226, max=69662, avg=30118.43, stdev=9480.79 00:43:50.614 clat (usec): min=194, max=1104, avg=577.70, stdev=124.56 00:43:50.614 lat (usec): min=227, max=1113, avg=607.82, stdev=128.08 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 258], 5.00th=[ 363], 10.00th=[ 404], 20.00th=[ 474], 00:43:50.614 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:43:50.614 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:43:50.614 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 1106], 99.95th=[ 1106], 00:43:50.614 | 99.99th=[ 1106] 00:43:50.614 bw ( KiB/s): min= 4096, max= 4096, per=45.68%, avg=4096.00, stdev= 0.00, samples=1 00:43:50.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:50.614 lat (usec) : 250=0.35%, 500=13.62%, 750=37.61%, 1000=5.18% 00:43:50.614 lat (msec) : 2=43.23% 00:43:50.614 cpu : usr=3.00%, sys=3.90%, ctx=1138, majf=0, minf=1 00:43:50.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:50.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 issued rwts: total=512,626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:50.614 job2: (groupid=0, jobs=1): err= 0: pid=728629: Fri Sep 27 16:02:30 2024 00:43:50.614 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:43:50.614 slat (nsec): min=10424, max=25941, avg=24783.41, stdev=3702.20 00:43:50.614 clat (usec): min=13730, max=42067, avg=40207.39, stdev=6828.30 00:43:50.614 lat (usec): min=13756, max=42092, avg=40232.17, stdev=6828.13 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[13698], 5.00th=[13698], 10.00th=[41157], 20.00th=[41681], 
00:43:50.614 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:50.614 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:50.614 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:50.614 | 99.99th=[42206] 00:43:50.614 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:43:50.614 slat (nsec): min=10008, max=52682, avg=30990.99, stdev=8387.10 00:43:50.614 clat (usec): min=141, max=1015, avg=599.40, stdev=145.33 00:43:50.614 lat (usec): min=175, max=1049, avg=630.39, stdev=146.66 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 289], 5.00th=[ 359], 10.00th=[ 416], 20.00th=[ 469], 00:43:50.614 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:43:50.614 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 840], 00:43:50.614 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1020], 00:43:50.614 | 99.99th=[ 1020] 00:43:50.614 bw ( KiB/s): min= 4096, max= 4096, per=45.68%, avg=4096.00, stdev= 0.00, samples=1 00:43:50.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:50.614 lat (usec) : 250=0.57%, 500=23.82%, 750=58.79%, 1000=13.23% 00:43:50.614 lat (msec) : 2=0.38%, 20=0.19%, 50=3.02% 00:43:50.614 cpu : usr=0.59%, sys=1.68%, ctx=529, majf=0, minf=1 00:43:50.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:50.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.614 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:50.614 job3: (groupid=0, jobs=1): err= 0: pid=728635: Fri Sep 27 16:02:30 2024 00:43:50.614 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1014msec) 00:43:50.614 slat (nsec): min=25227, max=26449, avg=25562.35, stdev=261.77 00:43:50.614 clat (usec): min=1129, max=42255, avg=39528.92, stdev=9896.59 00:43:50.614 lat (usec): min=1155, max=42280, avg=39554.49, stdev=9896.37 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41681], 20.00th=[41681], 00:43:50.614 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:50.614 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:50.614 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:50.614 | 99.99th=[42206] 00:43:50.614 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:43:50.614 slat (nsec): min=9678, max=66661, avg=31087.53, stdev=8316.09 00:43:50.614 clat (usec): min=255, max=1381, avg=627.86, stdev=146.28 00:43:50.614 lat (usec): min=277, max=1417, avg=658.95, stdev=148.25 00:43:50.614 clat percentiles (usec): 00:43:50.614 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 437], 20.00th=[ 506], 00:43:50.614 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:43:50.614 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 832], 00:43:50.614 | 99.00th=[ 947], 99.50th=[ 1254], 99.90th=[ 1385], 99.95th=[ 1385], 00:43:50.614 | 99.99th=[ 1385] 00:43:50.614 bw ( KiB/s): min= 4096, max= 4096, per=45.68%, avg=4096.00, stdev= 0.00, samples=1 00:43:50.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:50.614 lat (usec) : 500=17.20%, 750=61.25%, 1000=17.58% 00:43:50.614 lat (msec) : 2=0.95%, 50=3.02% 00:43:50.614 cpu : usr=0.79%, sys=1.48%, ctx=529, majf=0, minf=1 
00:43:50.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:50.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:50.615 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:50.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:50.615 00:43:50.615 Run status group 0 (all jobs): 00:43:50.615 READ: bw=4174KiB/s (4274kB/s), 67.1KiB/s-2046KiB/s (68.7kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1014msec 00:43:50.615 WRITE: bw=8966KiB/s (9182kB/s), 2020KiB/s-2501KiB/s (2068kB/s-2562kB/s), io=9092KiB (9310kB), run=1001-1014msec 00:43:50.615 00:43:50.615 Disk stats (read/write): 00:43:50.615 nvme0n1: ios=478/512, merge=0/0, ticks=887/275, in_queue=1162, util=95.69% 00:43:50.615 nvme0n2: ios=468/512, merge=0/0, ticks=515/226, in_queue=741, util=87.84% 00:43:50.615 nvme0n3: ios=12/512, merge=0/0, ticks=503/295, in_queue=798, util=88.35% 00:43:50.615 nvme0n4: ios=12/512, merge=0/0, ticks=463/303, in_queue=766, util=89.48% 00:43:50.615 16:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:50.615 [global] 00:43:50.615 thread=1 00:43:50.615 invalidate=1 00:43:50.615 rw=randwrite 00:43:50.615 time_based=1 00:43:50.615 runtime=1 00:43:50.615 ioengine=libaio 00:43:50.615 direct=1 00:43:50.615 bs=4096 00:43:50.615 iodepth=1 00:43:50.615 norandommap=0 00:43:50.615 numjobs=1 00:43:50.615 00:43:50.615 verify_dump=1 00:43:50.615 verify_backlog=512 00:43:50.615 verify_state_save=0 00:43:50.615 do_verify=1 00:43:50.615 verify=crc32c-intel 00:43:50.615 [job0] 00:43:50.615 filename=/dev/nvme0n1 00:43:50.615 [job1] 00:43:50.615 filename=/dev/nvme0n2 00:43:50.615 [job2] 00:43:50.615 filename=/dev/nvme0n3 00:43:50.615 [job3] 00:43:50.615 filename=/dev/nvme0n4 00:43:50.615 Could not set queue depth (nvme0n1) 00:43:50.615 Could not set queue depth (nvme0n2) 00:43:50.615 Could not set queue depth (nvme0n3) 00:43:50.615 Could not set queue depth (nvme0n4) 00:43:51.182 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:51.182 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:51.182 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:51.182 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:51.182 fio-3.35 00:43:51.182 Starting 4 threads 00:43:52.566 00:43:52.566 job0: (groupid=0, jobs=1): err= 0: pid=729051: Fri Sep 27 16:02:32 2024 00:43:52.566 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:52.566 slat (nsec): min=8000, max=43161, avg=25276.25, stdev=2632.84 00:43:52.566 clat (usec): min=607, max=2395, avg=1010.85, stdev=98.16 00:43:52.566 lat (usec): min=632, max=2421, avg=1036.13, stdev=98.15 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 955], 00:43:52.566 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1012], 60.00th=[ 1029], 00:43:52.566 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:43:52.566 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 2409], 99.95th=[ 2409], 00:43:52.566 | 99.99th=[ 2409] 00:43:52.566 write: IOPS=705, BW=2821KiB/s 
(2889kB/s)(2824KiB/1001msec); 0 zone resets 00:43:52.566 slat (nsec): min=9068, max=52841, avg=28138.00, stdev=8645.25 00:43:52.566 clat (usec): min=182, max=964, avg=623.77, stdev=114.01 00:43:52.566 lat (usec): min=211, max=995, avg=651.91, stdev=117.75 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 465], 20.00th=[ 537], 00:43:52.566 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:43:52.566 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 791], 00:43:52.566 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:43:52.566 | 99.99th=[ 963] 00:43:52.566 bw ( KiB/s): min= 4096, max= 4096, per=33.98%, avg=4096.00, stdev= 0.00, samples=1 00:43:52.566 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:52.566 lat (usec) : 250=0.16%, 500=8.54%, 750=43.92%, 1000=22.33% 00:43:52.566 lat (msec) : 2=24.96%, 4=0.08% 00:43:52.566 cpu : usr=1.50%, sys=3.80%, ctx=1218, majf=0, minf=1 00:43:52.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 issued rwts: total=512,706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.566 job1: (groupid=0, jobs=1): err= 0: pid=729078: Fri Sep 27 16:02:32 2024 00:43:52.566 read: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec) 00:43:52.566 slat (nsec): min=6721, max=44350, avg=23071.27, stdev=6997.04 00:43:52.566 clat (usec): min=367, max=1013, avg=738.20, stdev=106.42 00:43:52.566 lat (usec): min=374, max=1039, avg=761.27, stdev=108.48 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 429], 5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 660], 00:43:52.566 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 775], 00:43:52.566 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 906], 00:43:52.566 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1012], 00:43:52.566 | 99.99th=[ 1012] 00:43:52.566 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:52.566 slat (nsec): min=8990, max=63293, avg=26431.15, stdev=10243.98 00:43:52.566 clat (usec): min=129, max=734, avg=480.57, stdev=118.30 00:43:52.566 lat (usec): min=138, max=766, avg=507.00, stdev=124.47 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 180], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 363], 00:43:52.566 | 30.00th=[ 433], 40.00th=[ 465], 50.00th=[ 498], 60.00th=[ 529], 00:43:52.566 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 660], 00:43:52.566 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 717], 99.95th=[ 734], 00:43:52.566 | 99.99th=[ 734] 00:43:52.566 bw ( KiB/s): min= 4096, max= 4096, per=33.98%, avg=4096.00, stdev= 0.00, samples=1 00:43:52.566 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:52.566 lat (usec) : 250=1.40%, 500=31.38%, 750=49.42%, 1000=17.73% 00:43:52.566 lat (msec) : 2=0.06% 00:43:52.566 cpu : usr=2.80%, sys=3.70%, ctx=1641, majf=0, minf=1 00:43:52.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 issued rwts: total=617,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.566 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:43:52.566 job2: (groupid=0, jobs=1): err= 0: pid=729109: Fri Sep 27 16:02:32 2024 00:43:52.566 read: IOPS=16, BW=66.3KiB/s (67.9kB/s)(68.0KiB/1025msec) 00:43:52.566 slat (nsec): min=25085, max=25916, avg=25436.12, stdev=222.41 00:43:52.566 clat (usec): min=1254, max=42066, avg=39532.62, stdev=9865.36 00:43:52.566 lat (usec): min=1280, max=42091, avg=39558.05, stdev=9865.31 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 1254], 5.00th=[ 1254], 10.00th=[41157], 20.00th=[41681], 00:43:52.566 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:52.566 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:52.566 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:52.566 | 99.99th=[42206] 00:43:52.566 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:43:52.566 slat (nsec): min=9482, max=56450, avg=28523.58, stdev=8525.06 00:43:52.566 clat (usec): min=339, max=1045, avg=652.29, stdev=113.02 00:43:52.566 lat (usec): min=349, max=1078, avg=680.82, stdev=116.40 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 363], 5.00th=[ 445], 10.00th=[ 502], 20.00th=[ 570], 00:43:52.566 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:43:52.566 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 807], 00:43:52.566 | 99.00th=[ 906], 99.50th=[ 963], 99.90th=[ 1045], 99.95th=[ 1045], 00:43:52.566 | 99.99th=[ 1045] 00:43:52.566 bw ( KiB/s): min= 4096, max= 4096, per=33.98%, avg=4096.00, stdev= 0.00, samples=1 00:43:52.566 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:52.566 lat (usec) : 500=9.45%, 750=69.19%, 1000=17.96% 00:43:52.566 lat (msec) : 2=0.38%, 50=3.02% 00:43:52.566 cpu : usr=0.78%, sys=1.37%, ctx=529, majf=0, minf=1 00:43:52.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.566 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.566 job3: (groupid=0, jobs=1): err= 0: pid=729121: Fri Sep 27 16:02:32 2024 00:43:52.566 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:52.566 slat (nsec): min=8550, max=74150, avg=29179.93, stdev=5221.78 00:43:52.566 clat (usec): min=282, max=1399, avg=945.80, stdev=104.65 00:43:52.566 lat (usec): min=311, max=1452, avg=974.98, stdev=105.43 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 685], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 873], 00:43:52.566 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 971], 00:43:52.566 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1106], 00:43:52.566 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1401], 99.95th=[ 1401], 00:43:52.566 | 99.99th=[ 1401] 00:43:52.566 write: IOPS=846, BW=3385KiB/s (3466kB/s)(3388KiB/1001msec); 0 zone resets 00:43:52.566 slat (nsec): min=9406, max=72092, avg=31034.60, stdev=11014.56 00:43:52.566 clat (usec): min=164, max=921, avg=547.01, stdev=112.97 00:43:52.566 lat (usec): min=197, max=954, avg=578.04, stdev=116.07 00:43:52.566 clat percentiles (usec): 00:43:52.566 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 404], 20.00th=[ 461], 00:43:52.566 | 30.00th=[ 490], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 570], 00:43:52.567 | 70.00th=[ 603], 80.00th=[ 635], 
90.00th=[ 693], 95.00th=[ 742], 00:43:52.567 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:43:52.567 | 99.99th=[ 922] 00:43:52.567 bw ( KiB/s): min= 4096, max= 4096, per=33.98%, avg=4096.00, stdev= 0.00, samples=1 00:43:52.567 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:52.567 lat (usec) : 250=0.15%, 500=20.38%, 750=41.06%, 1000=27.08% 00:43:52.567 lat (msec) : 2=11.33% 00:43:52.567 cpu : usr=3.50%, sys=4.80%, ctx=1359, majf=0, minf=1 00:43:52.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:52.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.567 issued rwts: total=512,847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:52.567 00:43:52.567 Run status group 0 (all jobs): 00:43:52.567 READ: bw=6470KiB/s (6626kB/s), 66.3KiB/s-2466KiB/s (67.9kB/s-2525kB/s), io=6632KiB (6791kB), run=1001-1025msec 00:43:52.567 WRITE: bw=11.8MiB/s (12.3MB/s), 1998KiB/s-4092KiB/s (2046kB/s-4190kB/s), io=12.1MiB (12.7MB), run=1001-1025msec 00:43:52.567 00:43:52.567 Disk stats (read/write): 00:43:52.567 nvme0n1: ios=475/512, merge=0/0, ticks=570/310, in_queue=880, util=87.37% 00:43:52.567 nvme0n2: ios=525/763, merge=0/0, ticks=384/352, in_queue=736, util=81.49% 00:43:52.567 nvme0n3: ios=11/512, merge=0/0, ticks=421/328, in_queue=749, util=86.73% 00:43:52.567 nvme0n4: ios=488/512, merge=0/0, ticks=444/246, in_queue=690, util=88.99% 00:43:52.567 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:52.567 [global] 00:43:52.567 thread=1 00:43:52.567 invalidate=1 00:43:52.567 rw=write 00:43:52.567 time_based=1 00:43:52.567 runtime=1 00:43:52.567 ioengine=libaio 00:43:52.567 direct=1 00:43:52.567 bs=4096 00:43:52.567 iodepth=128 00:43:52.567 norandommap=0 00:43:52.567 numjobs=1 00:43:52.567 00:43:52.567 verify_dump=1 00:43:52.567 verify_backlog=512 00:43:52.567 verify_state_save=0 00:43:52.567 do_verify=1 00:43:52.567 verify=crc32c-intel 00:43:52.567 [job0] 00:43:52.567 filename=/dev/nvme0n1 00:43:52.567 [job1] 00:43:52.567 filename=/dev/nvme0n2 00:43:52.567 [job2] 00:43:52.567 filename=/dev/nvme0n3 00:43:52.567 [job3] 00:43:52.567 filename=/dev/nvme0n4 00:43:52.567 Could not set queue depth (nvme0n1) 00:43:52.567 Could not set queue depth (nvme0n2) 00:43:52.567 Could not set queue depth (nvme0n3) 00:43:52.567 Could not set queue depth (nvme0n4) 00:43:52.827 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:52.827 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:52.827 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:52.827 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:52.827 fio-3.35 00:43:52.827 Starting 4 threads 00:43:54.212 00:43:54.212 job0: (groupid=0, jobs=1): err= 0: pid=729535: Fri Sep 27 16:02:34 2024 00:43:54.212 read: IOPS=3165, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1010msec) 00:43:54.212 slat (nsec): min=1330, max=12815k, avg=127477.42, stdev=892210.92 00:43:54.212 clat (usec): min=3394, max=81260, avg=16188.80, stdev=8918.98 00:43:54.212 lat 
(usec): min=6250, max=81267, avg=16316.28, stdev=8993.87 00:43:54.212 clat percentiles (usec): 00:43:54.212 | 1.00th=[ 6718], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[11338], 00:43:54.212 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14877], 60.00th=[15926], 00:43:54.212 | 70.00th=[16909], 80.00th=[18744], 90.00th=[21365], 95.00th=[25822], 00:43:54.212 | 99.00th=[65799], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:43:54.212 | 99.99th=[81265] 00:43:54.212 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:43:54.212 slat (nsec): min=1812, max=16751k, avg=153387.59, stdev=998059.27 00:43:54.212 clat (usec): min=3193, max=81254, avg=21254.00, stdev=20634.78 00:43:54.212 lat (usec): min=3227, max=81267, avg=21407.39, stdev=20782.09 00:43:54.212 clat percentiles (usec): 00:43:54.212 | 1.00th=[ 3294], 5.00th=[ 7046], 10.00th=[ 8717], 20.00th=[10421], 00:43:54.212 | 30.00th=[10945], 40.00th=[12387], 50.00th=[13829], 60.00th=[15533], 00:43:54.212 | 70.00th=[17695], 80.00th=[19006], 90.00th=[72877], 95.00th=[77071], 00:43:54.212 | 99.00th=[79168], 99.50th=[80217], 99.90th=[80217], 99.95th=[81265], 00:43:54.212 | 99.99th=[81265] 00:43:54.212 bw ( KiB/s): min=12088, max=16560, per=14.12%, avg=14324.00, stdev=3162.18, samples=2 00:43:54.212 iops : min= 3022, max= 4140, avg=3581.00, stdev=790.55, samples=2 00:43:54.212 lat (msec) : 4=1.19%, 10=13.77%, 20=68.18%, 50=9.67%, 100=7.18% 00:43:54.213 cpu : usr=2.38%, sys=3.87%, ctx=230, majf=0, minf=2 00:43:54.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:54.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:54.213 issued rwts: total=3197,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:54.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:54.213 job1: (groupid=0, jobs=1): err= 0: pid=729544: Fri Sep 27 16:02:34 2024 00:43:54.213 read: IOPS=7900, BW=30.9MiB/s (32.4MB/s)(31.0MiB/1004msec) 00:43:54.213 slat (nsec): min=944, max=10481k, avg=63168.69, stdev=495217.68 00:43:54.213 clat (usec): min=1929, max=29268, avg=8457.87, stdev=3578.64 00:43:54.213 lat (usec): min=3446, max=29276, avg=8521.04, stdev=3606.95 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 5932], 00:43:54.213 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 8160], 00:43:54.213 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[13042], 95.00th=[14746], 00:43:54.213 | 99.00th=[22152], 99.50th=[25822], 99.90th=[29230], 99.95th=[29230], 00:43:54.213 | 99.99th=[29230] 00:43:54.213 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:43:54.213 slat (nsec): min=1582, max=9834.6k, avg=56024.49, stdev=418473.35 00:43:54.213 clat (usec): min=1136, max=20454, avg=7351.98, stdev=2371.81 00:43:54.213 lat (usec): min=1147, max=20483, avg=7408.00, stdev=2383.58 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5473], 00:43:54.213 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7242], 00:43:54.213 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10683], 00:43:54.213 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:43:54.213 | 99.99th=[20579] 00:43:54.213 bw ( KiB/s): min=28672, max=36864, per=32.31%, avg=32768.00, stdev=5792.62, samples=2 00:43:54.213 iops : min= 7168, max= 9216, avg=8192.00, 
stdev=1448.15, samples=2 00:43:54.213 lat (msec) : 2=0.02%, 4=1.05%, 10=84.57%, 20=12.95%, 50=1.41% 00:43:54.213 cpu : usr=5.58%, sys=7.68%, ctx=486, majf=0, minf=2 00:43:54.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:54.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:54.213 issued rwts: total=7932,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:54.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:54.213 job2: (groupid=0, jobs=1): err= 0: pid=729560: Fri Sep 27 16:02:34 2024 00:43:54.213 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:43:54.213 slat (nsec): min=981, max=8672.5k, avg=67796.00, stdev=491315.92 00:43:54.213 clat (usec): min=2685, max=16537, avg=8745.55, stdev=2197.06 00:43:54.213 lat (usec): min=2690, max=16553, avg=8813.35, stdev=2218.34 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 4146], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 7046], 00:43:54.213 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:43:54.213 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[11863], 95.00th=[12911], 00:43:54.213 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15533], 99.95th=[15533], 00:43:54.213 | 99.99th=[16581] 00:43:54.213 write: IOPS=7936, BW=31.0MiB/s (32.5MB/s)(31.2MiB/1006msec); 0 zone resets 00:43:54.213 slat (nsec): min=1629, max=6761.2k, avg=53283.99, stdev=340314.34 00:43:54.213 clat (usec): min=1344, max=15448, avg=7571.89, stdev=1822.67 00:43:54.213 lat (usec): min=1387, max=15451, avg=7625.17, stdev=1829.99 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 3032], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 5997], 00:43:54.213 | 30.00th=[ 6652], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8160], 00:43:54.213 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[10290], 95.00th=[10945], 00:43:54.213 | 99.00th=[11731], 99.50th=[12256], 99.90th=[14615], 99.95th=[14877], 00:43:54.213 | 99.99th=[15401] 00:43:54.213 bw ( KiB/s): min=30080, max=32768, per=30.98%, avg=31424.00, stdev=1900.70, samples=2 00:43:54.213 iops : min= 7520, max= 8192, avg=7856.00, stdev=475.18, samples=2 00:43:54.213 lat (msec) : 2=0.03%, 4=1.73%, 10=80.56%, 20=17.68% 00:43:54.213 cpu : usr=4.98%, sys=7.06%, ctx=722, majf=0, minf=1 00:43:54.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:54.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:54.213 issued rwts: total=7680,7984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:54.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:54.213 job3: (groupid=0, jobs=1): err= 0: pid=729566: Fri Sep 27 16:02:34 2024 00:43:54.213 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:43:54.213 slat (nsec): min=981, max=12155k, avg=85971.64, stdev=691610.91 00:43:54.213 clat (usec): min=3765, max=39264, avg=11668.95, stdev=4623.00 00:43:54.213 lat (usec): min=3771, max=39267, avg=11754.92, stdev=4667.07 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7832], 00:43:54.213 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[10421], 60.00th=[11731], 00:43:54.213 | 70.00th=[13698], 80.00th=[15664], 90.00th=[17957], 95.00th=[19792], 00:43:54.213 | 99.00th=[25560], 99.50th=[26346], 99.90th=[29230], 99.95th=[30540], 00:43:54.213 | 
99.99th=[39060] 00:43:54.213 write: IOPS=5795, BW=22.6MiB/s (23.7MB/s)(22.8MiB/1009msec); 0 zone resets 00:43:54.213 slat (nsec): min=1733, max=14617k, avg=83336.86, stdev=673673.76 00:43:54.213 clat (usec): min=1251, max=29099, avg=10654.34, stdev=4458.18 00:43:54.213 lat (usec): min=1261, max=29107, avg=10737.68, stdev=4486.96 00:43:54.213 clat percentiles (usec): 00:43:54.213 | 1.00th=[ 5080], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 7308], 00:43:54.213 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10814], 00:43:54.213 | 70.00th=[11731], 80.00th=[13829], 90.00th=[16712], 95.00th=[19792], 00:43:54.213 | 99.00th=[25035], 99.50th=[26346], 99.90th=[28967], 99.95th=[28967], 00:43:54.213 | 99.99th=[29230] 00:43:54.213 bw ( KiB/s): min=21200, max=24568, per=22.56%, avg=22884.00, stdev=2381.54, samples=2 00:43:54.213 iops : min= 5300, max= 6142, avg=5721.00, stdev=595.38, samples=2 00:43:54.213 lat (msec) : 2=0.08%, 4=0.03%, 10=49.93%, 20=45.19%, 50=4.76% 00:43:54.213 cpu : usr=4.17%, sys=6.45%, ctx=280, majf=0, minf=2 00:43:54.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:54.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:54.213 issued rwts: total=5632,5848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:54.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:54.213 00:43:54.213 Run status group 0 (all jobs): 00:43:54.213 READ: bw=94.5MiB/s (99.1MB/s), 12.4MiB/s-30.9MiB/s (13.0MB/s-32.4MB/s), io=95.5MiB (100MB), run=1004-1010msec 00:43:54.213 WRITE: bw=99.0MiB/s (104MB/s), 13.9MiB/s-31.9MiB/s (14.5MB/s-33.4MB/s), io=100MiB (105MB), run=1004-1010msec 00:43:54.213 00:43:54.213 Disk stats (read/write): 00:43:54.213 nvme0n1: ios=2466/2560, merge=0/0, ticks=40711/63096, in_queue=103807, util=98.20% 00:43:54.213 nvme0n2: ios=6578/6656, merge=0/0, ticks=53849/47601, in_queue=101450, util=94.90% 00:43:54.213 nvme0n3: ios=6337/6656, merge=0/0, ticks=54023/48738, in_queue=102761, util=96.41% 00:43:54.213 nvme0n4: ios=5025/5120, merge=0/0, ticks=52479/47998, in_queue=100477, util=89.42% 00:43:54.213 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:54.213 [global] 00:43:54.213 thread=1 00:43:54.213 invalidate=1 00:43:54.213 rw=randwrite 00:43:54.213 time_based=1 00:43:54.213 runtime=1 00:43:54.213 ioengine=libaio 00:43:54.213 direct=1 00:43:54.213 bs=4096 00:43:54.213 iodepth=128 00:43:54.213 norandommap=0 00:43:54.213 numjobs=1 00:43:54.213 00:43:54.213 verify_dump=1 00:43:54.213 verify_backlog=512 00:43:54.213 verify_state_save=0 00:43:54.213 do_verify=1 00:43:54.213 verify=crc32c-intel 00:43:54.213 [job0] 00:43:54.213 filename=/dev/nvme0n1 00:43:54.213 [job1] 00:43:54.213 filename=/dev/nvme0n2 00:43:54.213 [job2] 00:43:54.213 filename=/dev/nvme0n3 00:43:54.213 [job3] 00:43:54.213 filename=/dev/nvme0n4 00:43:54.213 Could not set queue depth (nvme0n1) 00:43:54.213 Could not set queue depth (nvme0n2) 00:43:54.213 Could not set queue depth (nvme0n3) 00:43:54.213 Could not set queue depth (nvme0n4) 00:43:54.473 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:54.473 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:54.473 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:54.473 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:54.473 fio-3.35 00:43:54.473 Starting 4 threads 00:43:55.882 00:43:55.882 job0: (groupid=0, jobs=1): err= 0: pid=730036: Fri Sep 27 16:02:35 2024 00:43:55.882 read: IOPS=9089, BW=35.5MiB/s (37.2MB/s)(35.6MiB/1003msec) 00:43:55.882 slat (nsec): min=910, max=3820.1k, avg=52669.58, stdev=317457.01 00:43:55.882 clat (usec): min=1560, max=12282, avg=6887.35, stdev=1006.04 00:43:55.882 lat (usec): min=3280, max=12285, avg=6940.02, stdev=1031.96 00:43:55.882 clat percentiles (usec): 00:43:55.882 | 1.00th=[ 4490], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 6063], 00:43:55.882 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7177], 00:43:55.882 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 8160], 95.00th=[ 8586], 00:43:55.882 | 99.00th=[ 9241], 99.50th=[ 9634], 99.90th=[10421], 99.95th=[10552], 00:43:55.882 | 99.99th=[12256] 00:43:55.882 write: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec); 0 zone resets 00:43:55.882 slat (nsec): min=1555, max=16354k, avg=52719.89, stdev=347996.47 00:43:55.882 clat (usec): min=2986, max=21454, avg=6947.44, stdev=1814.71 00:43:55.882 lat (usec): min=2989, max=22664, avg=7000.16, stdev=1838.26 00:43:55.882 clat percentiles (usec): 00:43:55.882 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6063], 00:43:55.882 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 6980], 00:43:55.882 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8094], 00:43:55.882 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:43:55.882 | 99.99th=[21365] 00:43:55.882 bw ( KiB/s): min=36864, max=36864, per=44.64%, avg=36864.00, stdev= 0.00, samples=2 00:43:55.882 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:43:55.882 lat (msec) : 2=0.01%, 4=0.38%, 10=98.69%, 20=0.47%, 50=0.46% 00:43:55.882 cpu : usr=4.39%, sys=6.99%, ctx=817, majf=0, minf=1 00:43:55.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:43:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.882 issued rwts: total=9117,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.882 job1: (groupid=0, jobs=1): err= 0: pid=730037: Fri Sep 27 16:02:35 2024 00:43:55.882 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:43:55.882 slat (nsec): min=1909, max=16495k, avg=271804.92, stdev=1379123.83 00:43:55.882 clat (usec): min=10054, max=64230, avg=33622.18, stdev=8410.42 00:43:55.882 lat (usec): min=10058, max=71097, avg=33893.99, stdev=8495.82 00:43:55.882 clat percentiles (usec): 00:43:55.882 | 1.00th=[11731], 5.00th=[19792], 10.00th=[22414], 20.00th=[27395], 00:43:55.882 | 30.00th=[29492], 40.00th=[31327], 50.00th=[33817], 60.00th=[35914], 00:43:55.882 | 70.00th=[38011], 80.00th=[40109], 90.00th=[43779], 95.00th=[46400], 00:43:55.882 | 99.00th=[53216], 99.50th=[53216], 99.90th=[64226], 99.95th=[64226], 00:43:55.882 | 99.99th=[64226] 00:43:55.882 write: IOPS=1598, BW=6395KiB/s (6549kB/s)(6440KiB/1007msec); 0 zone resets 00:43:55.882 slat (nsec): min=1495, max=14816k, avg=356387.60, stdev=1617230.81 00:43:55.882 clat (msec): min=3, max=134, avg=47.09, stdev=34.94 00:43:55.882 lat (msec): min=7, 
max=134, avg=47.44, stdev=35.20 00:43:55.882 clat percentiles (msec): 00:43:55.882 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 24], 00:43:55.882 | 30.00th=[ 28], 40.00th=[ 30], 50.00th=[ 33], 60.00th=[ 39], 00:43:55.882 | 70.00th=[ 49], 80.00th=[ 72], 90.00th=[ 118], 95.00th=[ 128], 00:43:55.882 | 99.00th=[ 134], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:43:55.882 | 99.99th=[ 136] 00:43:55.882 bw ( KiB/s): min= 5840, max= 6448, per=7.44%, avg=6144.00, stdev=429.92, samples=2 00:43:55.882 iops : min= 1460, max= 1612, avg=1536.00, stdev=107.48, samples=2 00:43:55.882 lat (msec) : 4=0.03%, 10=0.16%, 20=9.92%, 50=73.94%, 100=9.15% 00:43:55.882 lat (msec) : 250=6.80% 00:43:55.882 cpu : usr=1.29%, sys=1.79%, ctx=162, majf=0, minf=2 00:43:55.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:43:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.882 issued rwts: total=1536,1610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.882 job2: (groupid=0, jobs=1): err= 0: pid=730043: Fri Sep 27 16:02:35 2024 00:43:55.882 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:43:55.882 slat (nsec): min=998, max=9579.8k, avg=107925.18, stdev=726397.38 00:43:55.882 clat (usec): min=4000, max=73672, avg=12673.23, stdev=6993.00 00:43:55.882 lat (usec): min=4939, max=73678, avg=12781.16, stdev=7079.22 00:43:55.882 clat percentiles (usec): 00:43:55.883 | 1.00th=[ 5342], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8356], 00:43:55.883 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[12387], 00:43:55.883 | 70.00th=[14222], 80.00th=[15926], 90.00th=[18744], 95.00th=[22676], 00:43:55.883 | 99.00th=[44827], 99.50th=[60556], 99.90th=[66847], 99.95th=[73925], 00:43:55.883 | 99.99th=[73925] 00:43:55.883 write: IOPS=3556, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:43:55.883 slat (usec): min=2, max=9234, avg=164.96, stdev=880.63 00:43:55.883 clat (usec): min=1168, max=78198, avg=22973.64, stdev=24316.01 00:43:55.883 lat (usec): min=1178, max=78214, avg=23138.60, stdev=24484.59 00:43:55.883 clat percentiles (usec): 00:43:55.883 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7504], 00:43:55.883 | 30.00th=[ 8356], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10683], 00:43:55.883 | 70.00th=[14353], 80.00th=[54264], 90.00th=[69731], 95.00th=[72877], 00:43:55.883 | 99.00th=[76022], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:43:55.883 | 99.99th=[78119] 00:43:55.883 bw ( KiB/s): min=12288, max=16384, per=17.36%, avg=14336.00, stdev=2896.31, samples=2 00:43:55.883 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:43:55.883 lat (msec) : 2=0.04%, 4=0.32%, 10=46.30%, 20=35.63%, 50=6.93% 00:43:55.883 lat (msec) : 100=10.78% 00:43:55.883 cpu : usr=2.58%, sys=4.57%, ctx=249, majf=0, minf=1 00:43:55.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:43:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.883 issued rwts: total=3584,3585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.883 job3: (groupid=0, jobs=1): err= 0: pid=730047: Fri Sep 27 16:02:35 2024 00:43:55.883 read: IOPS=6077, BW=23.7MiB/s 
(24.9MB/s)(24.0MiB/1011msec) 00:43:55.883 slat (nsec): min=985, max=10040k, avg=70584.67, stdev=549674.69 00:43:55.883 clat (usec): min=1780, max=28053, avg=9881.99, stdev=4282.07 00:43:55.883 lat (usec): min=1788, max=29592, avg=9952.58, stdev=4317.18 00:43:55.883 clat percentiles (usec): 00:43:55.883 | 1.00th=[ 3785], 5.00th=[ 5080], 10.00th=[ 6194], 20.00th=[ 6980], 00:43:55.883 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8979], 00:43:55.883 | 70.00th=[10421], 80.00th=[12256], 90.00th=[16909], 95.00th=[19006], 00:43:55.883 | 99.00th=[24773], 99.50th=[25035], 99.90th=[27919], 99.95th=[27919], 00:43:55.883 | 99.99th=[28181] 00:43:55.883 write: IOPS=6391, BW=25.0MiB/s (26.2MB/s)(25.2MiB/1011msec); 0 zone resets 00:43:55.883 slat (nsec): min=1553, max=8474.1k, avg=74943.77, stdev=542794.42 00:43:55.883 clat (usec): min=3502, max=78414, avg=10401.13, stdev=10112.45 00:43:55.883 lat (usec): min=3528, max=78424, avg=10476.07, stdev=10180.41 00:43:55.883 clat percentiles (usec): 00:43:55.883 | 1.00th=[ 4490], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6652], 00:43:55.883 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8586], 00:43:55.883 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[12649], 95.00th=[16057], 00:43:55.883 | 99.00th=[70779], 99.50th=[71828], 99.90th=[76022], 99.95th=[78119], 00:43:55.883 | 99.99th=[78119] 00:43:55.883 bw ( KiB/s): min=25208, max=25464, per=30.68%, avg=25336.00, stdev=181.02, samples=2 00:43:55.883 iops : min= 6302, max= 6366, avg=6334.00, stdev=45.25, samples=2 00:43:55.883 lat (msec) : 2=0.05%, 4=0.69%, 10=70.76%, 20=24.54%, 50=2.68% 00:43:55.883 lat (msec) : 100=1.28% 00:43:55.883 cpu : usr=4.36%, sys=7.72%, ctx=328, majf=0, minf=1 00:43:55.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:55.883 issued rwts: total=6144,6462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:55.883 00:43:55.883 Run status group 0 (all jobs): 00:43:55.883 READ: bw=78.7MiB/s (82.6MB/s), 6101KiB/s-35.5MiB/s (6248kB/s-37.2MB/s), io=79.6MiB (83.5MB), run=1003-1011msec 00:43:55.883 WRITE: bw=80.6MiB/s (84.6MB/s), 6395KiB/s-35.9MiB/s (6549kB/s-37.6MB/s), io=81.5MiB (85.5MB), run=1003-1011msec 00:43:55.883 00:43:55.883 Disk stats (read/write): 00:43:55.883 nvme0n1: ios=7475/7680, merge=0/0, ticks=26427/24384, in_queue=50811, util=96.19% 00:43:55.883 nvme0n2: ios=1051/1502, merge=0/0, ticks=10873/23501, in_queue=34374, util=90.72% 00:43:55.883 nvme0n3: ios=2085/2560, merge=0/0, ticks=30456/72520, in_queue=102976, util=88.38% 00:43:55.883 nvme0n4: ios=5671/5983, merge=0/0, ticks=49984/44251, in_queue=94235, util=99.79% 00:43:55.883 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:55.883 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=730368 00:43:55.883 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:55.883 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:55.883 [global] 00:43:55.883 thread=1 00:43:55.883 invalidate=1 00:43:55.883 rw=read 00:43:55.883 time_based=1 00:43:55.883 
runtime=10 00:43:55.883 ioengine=libaio 00:43:55.883 direct=1 00:43:55.883 bs=4096 00:43:55.883 iodepth=1 00:43:55.883 norandommap=1 00:43:55.883 numjobs=1 00:43:55.883 00:43:55.883 [job0] 00:43:55.883 filename=/dev/nvme0n1 00:43:55.883 [job1] 00:43:55.883 filename=/dev/nvme0n2 00:43:55.883 [job2] 00:43:55.883 filename=/dev/nvme0n3 00:43:55.883 [job3] 00:43:55.883 filename=/dev/nvme0n4 00:43:55.883 Could not set queue depth (nvme0n1) 00:43:55.883 Could not set queue depth (nvme0n2) 00:43:55.883 Could not set queue depth (nvme0n3) 00:43:55.883 Could not set queue depth (nvme0n4) 00:43:56.147 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.147 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.147 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.147 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:56.147 fio-3.35 00:43:56.147 Starting 4 threads 00:43:58.687 16:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:58.687 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:58.947 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:43:58.947 fio: pid=730556, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:58.947 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:58.947 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:58.947 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=278528, buflen=4096 00:43:58.947 fio: pid=730554, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:59.206 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1429504, buflen=4096 00:43:59.206 fio: pid=730552, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:59.206 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.206 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:59.467 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.467 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:59.467 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8548352, buflen=4096 00:43:59.467 fio: pid=730553, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:59.467 00:43:59.467 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=730552: Fri Sep 27 16:02:39 2024 00:43:59.467 read: IOPS=117, BW=470KiB/s (481kB/s)(1396KiB/2973msec) 00:43:59.467 slat (usec): min=6, max=223, avg=22.62, stdev=14.31 00:43:59.467 clat (usec): min=449, max=42135, avg=8426.70, stdev=15949.56 00:43:59.467 lat (usec): min=473, max=42160, avg=8449.31, stdev=15952.86 00:43:59.467 clat percentiles (usec): 00:43:59.467 | 1.00th=[ 486], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 742], 00:43:59.467 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[ 840], 60.00th=[ 873], 00:43:59.467 | 70.00th=[ 906], 80.00th=[ 1020], 90.00th=[41681], 95.00th=[42206], 00:43:59.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:59.467 | 99.99th=[42206] 00:43:59.467 bw ( KiB/s): min= 96, max= 2240, per=16.70%, avg=540.80, stdev=950.51, samples=5 00:43:59.467 iops : min= 24, max= 560, avg=135.20, stdev=237.63, samples=5 00:43:59.467 lat (usec) : 500=1.71%, 750=19.14%, 1000=58.86% 00:43:59.467 lat (msec) : 2=1.43%, 50=18.57% 00:43:59.467 cpu : usr=0.24%, sys=0.30%, ctx=352, majf=0, minf=1 00:43:59.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 issued rwts: total=350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:59.467 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=730553: Fri Sep 27 16:02:39 2024 00:43:59.467 read: IOPS=656, BW=2624KiB/s (2687kB/s)(8348KiB/3181msec) 00:43:59.467 slat (usec): min=6, max=8805, avg=43.52, stdev=348.50 00:43:59.467 clat (usec): min=404, max=42057, avg=1463.79, stdev=4516.72 00:43:59.467 lat (usec): min=432, max=42085, avg=1504.18, stdev=4526.86 00:43:59.467 clat percentiles (usec): 00:43:59.467 | 1.00th=[ 619], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 881], 00:43:59.467 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:43:59.467 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1106], 00:43:59.467 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:59.467 | 99.99th=[42206] 00:43:59.467 bw ( KiB/s): min= 648, max= 4048, per=83.79%, avg=2709.33, stdev=1421.29, samples=6 00:43:59.467 iops : min= 162, max= 1012, avg=677.33, stdev=355.32, samples=6 00:43:59.467 lat (usec) : 500=0.29%, 750=4.69%, 1000=65.61% 00:43:59.467 lat (msec) : 2=28.02%, 4=0.05%, 50=1.29% 00:43:59.467 cpu : usr=1.32%, sys=2.42%, ctx=2097, majf=0, minf=2 00:43:59.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:59.467 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=730554: Fri Sep 27 16:02:39 2024 00:43:59.467 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(272KiB/2805msec) 00:43:59.467 slat (usec): min=26, max=2507, avg=63.99, stdev=298.49 00:43:59.467 clat (usec): min=1066, max=42128, avg=40862.85, stdev=4921.69 00:43:59.467 lat (usec): min=1140, max=43919, avg=40927.39, stdev=4929.34 00:43:59.467 clat percentiles (usec): 00:43:59.467 | 
1.00th=[ 1074], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:59.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:43:59.467 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:59.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:59.467 | 99.99th=[42206] 00:43:59.467 bw ( KiB/s): min= 96, max= 104, per=3.00%, avg=97.60, stdev= 3.58, samples=5 00:43:59.467 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:43:59.467 lat (msec) : 2=1.45%, 50=97.10% 00:43:59.467 cpu : usr=0.00%, sys=0.14%, ctx=71, majf=0, minf=2 00:43:59.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:59.467 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=730556: Fri Sep 27 16:02:39 2024 00:43:59.467 read: IOPS=25, BW=102KiB/s (105kB/s)(268KiB/2619msec) 00:43:59.467 slat (nsec): min=8759, max=40671, avg=26686.81, stdev=3571.54 00:43:59.467 clat (usec): min=554, max=42430, avg=38737.37, stdev=10826.37 00:43:59.467 lat (usec): min=562, max=42457, avg=38764.04, stdev=10827.59 00:43:59.467 clat percentiles (usec): 00:43:59.467 | 1.00th=[ 553], 5.00th=[ 996], 10.00th=[41157], 20.00th=[41157], 00:43:59.467 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:59.467 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:59.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:59.467 | 99.99th=[42206] 00:43:59.467 bw ( KiB/s): min= 96, max= 120, per=3.16%, avg=102.40, stdev=10.43, samples=5 00:43:59.467 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:43:59.467 lat (usec) : 750=1.47%, 1000=4.41% 00:43:59.467 lat (msec) : 2=1.47%, 50=91.18% 00:43:59.467 cpu : usr=0.00%, sys=0.15%, ctx=68, majf=0, minf=2 00:43:59.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:59.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:59.467 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:59.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:59.467 00:43:59.467 Run status group 0 (all jobs): 00:43:59.467 READ: bw=3233KiB/s (3311kB/s), 97.0KiB/s-2624KiB/s (99.3kB/s-2687kB/s), io=10.0MiB (10.5MB), run=2619-3181msec 00:43:59.467 00:43:59.467 Disk stats (read/write): 00:43:59.467 nvme0n1: ios=346/0, merge=0/0, ticks=2812/0, in_queue=2812, util=94.76% 00:43:59.467 nvme0n2: ios=2106/0, merge=0/0, ticks=3081/0, in_queue=3081, util=99.32% 00:43:59.467 nvme0n3: ios=63/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.03% 00:43:59.467 nvme0n4: ios=66/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.42% 00:43:59.467 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.467 16:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:59.727 16:02:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.727 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:59.986 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.986 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:59.986 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:59.986 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 730368 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:00.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:00.246 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:00.505 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:00.506 nvmf hotplug test: fio failed as expected 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:44:00.506 16:02:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:00.506 16:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:00.506 rmmod nvme_tcp 00:44:00.506 rmmod nvme_fabrics 00:44:00.506 rmmod nvme_keyring 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 727200 ']' 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 727200 ']' 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 727200' 00:44:00.765 killing process with pid 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 727200 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:00.765 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:00.766 16:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:03.308 00:44:03.308 real 0m27.754s 00:44:03.308 user 2m13.519s 00:44:03.308 sys 0m12.008s 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.308 ************************************ 00:44:03.308 END TEST nvmf_fio_target 00:44:03.308 ************************************ 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:03.308 ************************************ 00:44:03.308 START TEST nvmf_bdevio 00:44:03.308 ************************************ 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:03.308 * Looking for test storage... 
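Editorial note, not part of the captured log: the "fio failed as expected" result above is the point of the hotplug test. Reads are started against the exported namespaces, the backing bdevs are then deleted out from under them with rpc.py, and the err=95 "Operation not supported" completions plus fio's nonzero exit status confirm that the I/O path saw the hot removal. A minimal sketch of that flow, assuming a running target, a connected /dev/nvme0n1, and rpc.py on PATH (names and paths are illustrative):

# start reads in the background, then hot-remove a backing bdev mid-run
fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=10 &
fio_pid=$!
sleep 3
scripts/rpc.py bdev_malloc_delete Malloc0   # the namespace's bdev disappears
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'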
00:44:03.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:03.308 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:03.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:03.308 --rc genhtml_branch_coverage=1 00:44:03.308 --rc genhtml_function_coverage=1 00:44:03.308 --rc genhtml_legend=1 00:44:03.308 --rc geninfo_all_blocks=1 00:44:03.308 --rc geninfo_unexecuted_blocks=1 00:44:03.309 00:44:03.309 ' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:03.309 --rc genhtml_branch_coverage=1 00:44:03.309 --rc genhtml_function_coverage=1 00:44:03.309 --rc genhtml_legend=1 00:44:03.309 --rc geninfo_all_blocks=1 00:44:03.309 --rc geninfo_unexecuted_blocks=1 00:44:03.309 00:44:03.309 ' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:03.309 --rc genhtml_branch_coverage=1 00:44:03.309 --rc genhtml_function_coverage=1 00:44:03.309 --rc genhtml_legend=1 00:44:03.309 --rc geninfo_all_blocks=1 00:44:03.309 --rc geninfo_unexecuted_blocks=1 00:44:03.309 00:44:03.309 ' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:03.309 --rc genhtml_branch_coverage=1 00:44:03.309 --rc genhtml_function_coverage=1 00:44:03.309 --rc genhtml_legend=1 00:44:03.309 --rc geninfo_all_blocks=1 00:44:03.309 --rc geninfo_unexecuted_blocks=1 00:44:03.309 00:44:03.309 ' 00:44:03.309 16:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:03.309 16:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:44:03.309 16:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.447 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:11.447 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:11.448 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:11.448 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:11.448 Found net devices under 0000:31:00.0: cvl_0_0 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:11.448 
16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:11.448 Found net devices under 0000:31:00.1: cvl_0_1 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:11.448 16:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:11.448 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:11.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:11.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:44:11.448 00:44:11.449 --- 10.0.0.2 ping statistics --- 00:44:11.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:11.449 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:11.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:11.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:44:11.449 00:44:11.449 --- 10.0.0.1 ping statistics --- 00:44:11.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:11.449 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=735645 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 735645 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 735645 ']' 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:11.449 16:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.449 [2024-09-27 16:02:51.316820] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:11.449 [2024-09-27 16:02:51.317973] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:44:11.449 [2024-09-27 16:02:51.318021] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:11.449 [2024-09-27 16:02:51.410345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:11.449 [2024-09-27 16:02:51.462626] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:11.449 [2024-09-27 16:02:51.462684] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:11.449 [2024-09-27 16:02:51.462693] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:11.449 [2024-09-27 16:02:51.462700] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:11.449 [2024-09-27 16:02:51.462706] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:11.449 [2024-09-27 16:02:51.462871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:44:11.449 [2024-09-27 16:02:51.463037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:44:11.449 [2024-09-27 16:02:51.463321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:44:11.449 [2024-09-27 16:02:51.463324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:11.449 [2024-09-27 16:02:51.557621] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:11.449 [2024-09-27 16:02:51.557691] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
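Editorial note, not part of the captured log: nvmf_tgt is launched above with "-m 0x78", and 0x78 is binary 01111000, i.e. CPU cores 3 through 6, matching the four "Reactor started on core 3/4/5/6" notices. The bdevio app further down runs with "-c 0x7" (cores 0 through 2), so the target and the test app never share a reactor core. A throwaway bash helper for decoding such masks (illustrative, not part of the test scripts):

mask=0x78                      # try 0x7 as well
for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# 0x78 -> cores 3 4 5 6; 0x7 -> cores 0 1 2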
00:44:11.449 [2024-09-27 16:02:51.558860] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:11.449 [2024-09-27 16:02:51.559123] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:11.449 [2024-09-27 16:02:51.559185] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:11.709 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:11.709 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:44:11.709 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:11.709 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:11.709 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.970 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.971 [2024-09-27 16:02:52.220345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.971 Malloc0 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:11.971 [2024-09-27 16:02:52.304648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:11.971 { 00:44:11.971 "params": { 00:44:11.971 "name": "Nvme$subsystem", 00:44:11.971 "trtype": "$TEST_TRANSPORT", 00:44:11.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:11.971 "adrfam": "ipv4", 00:44:11.971 "trsvcid": "$NVMF_PORT", 00:44:11.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:11.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:11.971 "hdgst": ${hdgst:-false}, 00:44:11.971 "ddgst": ${ddgst:-false} 00:44:11.971 }, 00:44:11.971 "method": "bdev_nvme_attach_controller" 00:44:11.971 } 00:44:11.971 EOF 00:44:11.971 )") 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:44:11.971 16:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:11.971 "params": { 00:44:11.971 "name": "Nvme1", 00:44:11.971 "trtype": "tcp", 00:44:11.971 "traddr": "10.0.0.2", 00:44:11.971 "adrfam": "ipv4", 00:44:11.971 "trsvcid": "4420", 00:44:11.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:11.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:11.971 "hdgst": false, 00:44:11.971 "ddgst": false 00:44:11.971 }, 00:44:11.971 "method": "bdev_nvme_attach_controller" 00:44:11.971 }' 00:44:11.971 [2024-09-27 16:02:52.369130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
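Editorial note, not part of the captured log: bdevio starts no target of its own. The JSON assembled above is piped to it on /dev/fd/62 and simply tells its bdev layer to attach the controller that the target just exported. For reference, a roughly equivalent attach issued against a running SPDK app would look like the following; the flags are rpc.py's bdev_nvme_attach_controller options as best recalled, so verify them against your tree:

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Either way the result is the bdev Nvme1n1 that the CUnit suite below exercises.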
00:44:11.971 [2024-09-27 16:02:52.369206] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735995 ] 00:44:11.971 [2024-09-27 16:02:52.454376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:12.233 [2024-09-27 16:02:52.502974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:12.233 [2024-09-27 16:02:52.503140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:12.233 [2024-09-27 16:02:52.503140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:12.233 I/O targets: 00:44:12.233 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:12.233 00:44:12.233 00:44:12.233 CUnit - A unit testing framework for C - Version 2.1-3 00:44:12.233 http://cunit.sourceforge.net/ 00:44:12.233 00:44:12.233 00:44:12.233 Suite: bdevio tests on: Nvme1n1 00:44:12.233 Test: blockdev write read block ...passed 00:44:12.494 Test: blockdev write zeroes read block ...passed 00:44:12.494 Test: blockdev write zeroes read no split ...passed 00:44:12.494 Test: blockdev write zeroes read split ...passed 00:44:12.494 Test: blockdev write zeroes read split partial ...passed 00:44:12.494 Test: blockdev reset ...[2024-09-27 16:02:52.821941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:12.494 [2024-09-27 16:02:52.822033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53a7d0 (9): Bad file descriptor 00:44:12.494 [2024-09-27 16:02:52.916744] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:12.494 passed 00:44:12.494 Test: blockdev write read 8 blocks ...passed 00:44:12.755 Test: blockdev write read size > 128k ...passed 00:44:12.755 Test: blockdev write read invalid size ...passed 00:44:12.755 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:12.755 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:12.755 Test: blockdev write read max offset ...passed 00:44:12.755 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:12.755 Test: blockdev writev readv 8 blocks ...passed 00:44:12.755 Test: blockdev writev readv 30 x 1block ...passed 00:44:12.755 Test: blockdev writev readv block ...passed 00:44:12.755 Test: blockdev writev readv size > 128k ...passed 00:44:12.755 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:12.755 Test: blockdev comparev and writev ...[2024-09-27 16:02:53.225934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.226002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.226021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.226030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.226701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.226718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.226732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.226742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.227422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.227438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.227454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.227463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.228164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.228179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:12.755 [2024-09-27 16:02:53.228193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:12.755 [2024-09-27 16:02:53.228202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:13.016 passed 00:44:13.016 Test: blockdev nvme passthru rw ...passed 00:44:13.016 Test: blockdev nvme passthru vendor specific ...[2024-09-27 16:02:53.312874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.016 [2024-09-27 16:02:53.312898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:13.016 [2024-09-27 16:02:53.313296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.016 [2024-09-27 16:02:53.313310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:13.016 [2024-09-27 16:02:53.313704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.016 [2024-09-27 16:02:53.313719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:13.016 [2024-09-27 16:02:53.314109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:13.016 [2024-09-27 16:02:53.314122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:13.016 passed 00:44:13.016 Test: blockdev nvme admin passthru ...passed 00:44:13.016 Test: blockdev copy ...passed 00:44:13.016 00:44:13.016 Run Summary: Type Total Ran Passed Failed Inactive 00:44:13.016 suites 1 1 n/a 0 0 00:44:13.016 tests 23 23 23 0 0 00:44:13.016 asserts 152 152 152 0 n/a 00:44:13.016 00:44:13.016 Elapsed time = 1.462 seconds 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:13.277 rmmod nvme_tcp 00:44:13.277 rmmod nvme_fabrics 00:44:13.277 rmmod nvme_keyring 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
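[editor's note] The nvmftestfini sequence that follows is the mirror image of the setup: unload the initiator-side kernel modules (the rmmod lines are modprobe -v output), kill the target by the pid recorded at startup, and undo the iptables and namespace plumbing. A condensed sketch of what the common.sh helpers do; interface and namespace names are taken from this log, and the exact form of _remove_spdk_ns is an assumption:

    trap - SIGINT SIGTERM EXIT                      # drop the cleanup trap set at startup
    sync
    modprobe -v -r nvme-tcp                         # prints the rmmod nvme_tcp/... lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"              # killprocess: stop the nvmf_tgt app (approximation)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                 # _remove_spdk_ns (assumed form)
    ip -4 addr flush cvl_0_1

Note that killprocess first checks the process name against the pid (the ps --no-headers -o comm= line above) before sending the signal, so a recycled pid is never killed by mistake.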
00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 735645 ']' 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 735645 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 735645 ']' 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 735645 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 735645 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 735645' 00:44:13.277 killing process with pid 735645 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 735645 00:44:13.277 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 735645 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:44:13.539 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:13.540 16:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.088 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:16.088 00:44:16.088 real 0m12.608s 00:44:16.088 user 0m10.334s 
00:44:16.088 sys 0m6.551s 00:44:16.088 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:16.088 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:16.088 ************************************ 00:44:16.088 END TEST nvmf_bdevio 00:44:16.088 ************************************ 00:44:16.088 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:16.088 00:44:16.088 real 5m0.102s 00:44:16.088 user 10m5.513s 00:44:16.088 sys 2m5.036s 00:44:16.088 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:16.088 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:16.088 ************************************ 00:44:16.088 END TEST nvmf_target_core_interrupt_mode 00:44:16.088 ************************************ 00:44:16.088 16:02:56 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:16.088 16:02:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:16.088 16:02:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:16.088 16:02:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:16.088 ************************************ 00:44:16.088 START TEST nvmf_interrupt 00:44:16.088 ************************************ 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:16.088 * Looking for test storage... 
00:44:16.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:16.088 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.089 --rc genhtml_branch_coverage=1 00:44:16.089 --rc genhtml_function_coverage=1 00:44:16.089 --rc genhtml_legend=1 00:44:16.089 --rc geninfo_all_blocks=1 00:44:16.089 --rc geninfo_unexecuted_blocks=1 00:44:16.089 00:44:16.089 ' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.089 --rc genhtml_branch_coverage=1 00:44:16.089 --rc genhtml_function_coverage=1 00:44:16.089 --rc genhtml_legend=1 00:44:16.089 --rc geninfo_all_blocks=1 00:44:16.089 --rc geninfo_unexecuted_blocks=1 00:44:16.089 00:44:16.089 ' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.089 --rc genhtml_branch_coverage=1 00:44:16.089 --rc genhtml_function_coverage=1 00:44:16.089 --rc genhtml_legend=1 00:44:16.089 --rc geninfo_all_blocks=1 00:44:16.089 --rc geninfo_unexecuted_blocks=1 00:44:16.089 00:44:16.089 ' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.089 --rc genhtml_branch_coverage=1 00:44:16.089 --rc genhtml_function_coverage=1 00:44:16.089 --rc genhtml_legend=1 00:44:16.089 --rc geninfo_all_blocks=1 00:44:16.089 --rc geninfo_unexecuted_blocks=1 00:44:16.089 00:44:16.089 ' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:44:16.089 16:02:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.232 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:24.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:24.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:24.233 Found net devices under 0000:31:00.0: cvl_0_0 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:24.233 Found net devices under 0000:31:00.1: cvl_0_1 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:24.233 16:03:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:24.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:24.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:44:24.233 00:44:24.233 --- 10.0.0.2 ping statistics --- 00:44:24.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:24.233 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:24.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:24.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:44:24.233 00:44:24.233 --- 10.0.0.1 ping statistics --- 00:44:24.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:24.233 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:24.233 16:03:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:24.233 16:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:44:24.233 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:24.233 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=740518 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 740518 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 740518 ']' 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:24.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:24.234 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.234 [2024-09-27 16:03:04.065057] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:24.234 [2024-09-27 16:03:04.066189] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:44:24.234 [2024-09-27 16:03:04.066240] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:24.234 [2024-09-27 16:03:04.154105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:24.234 [2024-09-27 16:03:04.202610] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:44:24.234 [2024-09-27 16:03:04.202677] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:24.234 [2024-09-27 16:03:04.202687] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:24.234 [2024-09-27 16:03:04.202694] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:24.234 [2024-09-27 16:03:04.202701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:24.234 [2024-09-27 16:03:04.202862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.234 [2024-09-27 16:03:04.202862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:24.234 [2024-09-27 16:03:04.267292] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:24.234 [2024-09-27 16:03:04.267878] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:24.234 [2024-09-27 16:03:04.268221] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:44:24.495 5000+0 records in 00:44:24.495 5000+0 records out 00:44:24.495 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179241 s, 571 MB/s 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.495 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.757 AIO0 00:44:24.757 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.757 16:03:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:44:24.757 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.757 16:03:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.757 [2024-09-27 16:03:05.004046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.757 16:03:05 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:24.757 [2024-09-27 16:03:05.056490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 740518 0 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 0 idle 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:24.757 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 740518 1 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 1 idle 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=740892 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 740518 0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 740518 0 busy 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:25.019 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740518 root 20 0 128.2g 44928 32256 R 75.0 0.0 0:00.40 reactor_0' 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740518 root 20 0 128.2g 44928 32256 R 75.0 0.0 0:00.40 reactor_0 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=75.0 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=75 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 740518 1 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 740518 1 busy 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:25.280 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740522 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.24 reactor_1' 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740522 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.24 reactor_1 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:25.542 16:03:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 740892 00:44:35.547 Initializing NVMe Controllers 00:44:35.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:35.547 Controller IO queue size 256, less than required. 00:44:35.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:35.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:35.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:35.547 Initialization complete. Launching workers. 
00:44:35.547 ======================================================== 00:44:35.547 Latency(us) 00:44:35.547 Device Information : IOPS MiB/s Average min max 00:44:35.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18465.30 72.13 13868.90 4716.60 31313.71 00:44:35.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19677.80 76.87 13013.92 4184.49 31821.13 00:44:35.547 ======================================================== 00:44:35.547 Total : 38143.09 149.00 13427.82 4184.49 31821.13 00:44:35.547 00:44:35.547 [2024-09-27 16:03:15.605858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121b60 is same with the state(6) to be set 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 740518 0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 0 idle 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 740518 1 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 1 idle 
00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:35.547 16:03:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:36.120 16:03:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:36.120 16:03:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:44:36.120 16:03:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:36.120 16:03:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:44:36.120 16:03:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 
00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 740518 0 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 0 idle 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:38.031 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740518 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.51 reactor_0' 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740518 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.51 reactor_0 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 740518 1 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 740518 1 idle 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=740518 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 740518 -w 256 00:44:38.290 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 740522 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 740522 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:38.556 16:03:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:38.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:38.556 16:03:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:38.851 rmmod nvme_tcp 00:44:38.851 rmmod nvme_fabrics 00:44:38.851 rmmod nvme_keyring 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:44:38.851 16:03:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 740518 ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 740518 ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 740518' 00:44:38.851 killing process with pid 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 740518 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:38.851 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:39.140 16:03:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.207 16:03:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:41.207 00:44:41.207 real 0m25.311s 00:44:41.207 user 0m40.300s 00:44:41.207 sys 0m9.694s 00:44:41.207 16:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:41.207 16:03:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:41.207 ************************************ 00:44:41.207 END TEST nvmf_interrupt 00:44:41.207 ************************************ 00:44:41.207 00:44:41.207 real 38m21.014s 00:44:41.207 user 92m10.856s 00:44:41.207 sys 11m23.130s 00:44:41.207 16:03:21 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:41.207 16:03:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.207 ************************************ 00:44:41.207 END TEST nvmf_tcp 00:44:41.207 ************************************ 00:44:41.207 16:03:21 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:44:41.207 16:03:21 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:41.207 16:03:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:41.207 16:03:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:41.207 16:03:21 -- common/autotest_common.sh@10 -- # set +x 00:44:41.207 ************************************ 00:44:41.207 START TEST spdkcli_nvmf_tcp 00:44:41.207 ************************************ 00:44:41.207 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:41.207 * Looking for test storage... 00:44:41.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:41.207 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:41.207 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:44:41.207 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:41.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.478 --rc genhtml_branch_coverage=1 00:44:41.478 --rc genhtml_function_coverage=1 00:44:41.478 --rc genhtml_legend=1 00:44:41.478 --rc geninfo_all_blocks=1 00:44:41.478 --rc geninfo_unexecuted_blocks=1 00:44:41.478 00:44:41.478 ' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:41.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.478 --rc genhtml_branch_coverage=1 00:44:41.478 --rc genhtml_function_coverage=1 00:44:41.478 --rc genhtml_legend=1 00:44:41.478 --rc geninfo_all_blocks=1 00:44:41.478 --rc geninfo_unexecuted_blocks=1 00:44:41.478 00:44:41.478 ' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:41.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.478 --rc genhtml_branch_coverage=1 00:44:41.478 --rc genhtml_function_coverage=1 00:44:41.478 --rc genhtml_legend=1 00:44:41.478 --rc geninfo_all_blocks=1 00:44:41.478 --rc geninfo_unexecuted_blocks=1 00:44:41.478 00:44:41.478 ' 00:44:41.478 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:41.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.479 --rc genhtml_branch_coverage=1 00:44:41.479 --rc genhtml_function_coverage=1 00:44:41.479 --rc genhtml_legend=1 00:44:41.479 --rc geninfo_all_blocks=1 00:44:41.479 --rc geninfo_unexecuted_blocks=1 00:44:41.479 00:44:41.479 ' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:41.479 
16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:41.479 16:03:21 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:41.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=744481 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 744481 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 744481 ']' 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:41.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:41.479 16:03:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.479 [2024-09-27 16:03:21.844658] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
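The waitforlisten step above blocks until the freshly launched nvmf_tgt (whose startup banner continues below) both stays alive and exposes its RPC socket. A rough standalone equivalent of that poll, assuming the default /var/tmp/spdk.sock path and a 10-second budget (wait_for_rpc is an illustrative name; the real helper also exercises the RPC server, not just the socket file):

    # Sketch of a waitforlisten-style readiness poll (socket path and retry budget assumed).
    wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local i
      for (( i = 0; i < 100; i++ )); do
        # Fail fast if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # Consider the target ready once its UNIX-domain RPC socket exists.
        [[ -S $sock ]] && return 0
        sleep 0.1
      done
      return 1
    }
    # Usage: wait_for_rpc "$nvmf_tgt_pid" && echo "target ready"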
00:44:41.479 [2024-09-27 16:03:21.844732] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744481 ] 00:44:41.479 [2024-09-27 16:03:21.928339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:41.739 [2024-09-27 16:03:21.976980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.739 [2024-09-27 16:03:21.977001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.308 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:42.308 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:42.308 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:42.308 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:42.308 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:42.308 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:42.308 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:42.308 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:42.308 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:42.308 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:42.308 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:42.308 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:42.308 ' 00:44:45.603 [2024-09-27 16:03:25.469834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:46.542 [2024-09-27 16:03:26.834081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:49.081 [2024-09-27 16:03:29.353086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:51.622 [2024-09-27 16:03:31.579584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:53.003 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:53.003 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:53.003 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:53.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:53.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:53.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:53.003 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:53.003 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:53.003 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:53.004 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:53.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:53.004 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:53.004 16:03:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.574 
16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.574 16:03:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:53.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:53.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:53.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:53.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:53.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:53.574 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:53.574 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:53.574 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:53.574 ' 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:00.161 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:00.161 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:00.161 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:00.161 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:00.161 
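Both batches above are fed through spdkcli_job.py, which runs each quoted command in spdkcli and checks for the expected substring in the reply. Before the target is killed below, the same operations could be replayed one at a time the way the check_match step invokes spdkcli.py (ll /nvmf above); a sketch reusing commands from this log, under the assumption that spdkcli.py accepts a single command as its argv the same way "ll /nvmf" is passed:

    # Sketch: drive spdkcli directly, one command per invocation (invocation style assumed).
    spdkcli=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
    "$spdkcli" /bdevs/malloc create 32 512 Malloc1
    "$spdkcli" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    "$spdkcli" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    "$spdkcli" ll /nvmf   # the listing that check_match diffs against the .match file
    # Teardown mirrors creation in reverse order, as in the delete batch above:
    "$spdkcli" /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
    "$spdkcli" /bdevs/malloc delete Malloc1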
16:03:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 744481 ']' 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 744481' 00:45:00.161 killing process with pid 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 744481 00:45:00.161 16:03:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 744481 ']' 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 744481 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 744481 ']' 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 744481 00:45:00.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (744481) - No such process 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 744481 is not found' 00:45:00.162 Process with pid 744481 is not found 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:00.162 00:45:00.162 real 0m18.243s 00:45:00.162 user 0m40.437s 00:45:00.162 sys 0m0.983s 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:00.162 16:03:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:00.162 ************************************ 00:45:00.162 END TEST spdkcli_nvmf_tcp 00:45:00.162 ************************************ 00:45:00.162 16:03:39 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:00.162 16:03:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:00.162 16:03:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:00.162 16:03:39 -- common/autotest_common.sh@10 -- # set +x 00:45:00.162 ************************************ 00:45:00.162 START TEST nvmf_identify_passthru 00:45:00.162 ************************************ 00:45:00.162 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:00.162 * Looking for test storage... 
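The lt 1.15 2 gate traced just below (the spdkcli suite above ran the identical check) decides whether the installed lcov predates 2.x by comparing dotted version strings field by field, with missing fields counting as zero. A compact standalone sketch of that comparison (version_lt is an illustrative name; the harness's cmp_versions additionally validates that each field is numeric):

    # Sketch of a field-wise dotted-version "less than" test (mirrors the lt/cmp_versions trace).
    version_lt() {
      local -a v1 v2
      IFS=. read -ra v1 <<<"$1"
      IFS=. read -ra v2 <<<"$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
        # Absent fields default to 0, so 1.15 vs 2 compares like 1.15.0 vs 2.0.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
      done
      return 1  # equal versions are not "less than"
    }
    # Usage: version_lt 1.15 2 && echo "lcov older than 2.x"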
00:45:00.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:00.162 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:00.162 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:45:00.162 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:00.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.162 --rc genhtml_branch_coverage=1 00:45:00.162 --rc genhtml_function_coverage=1 00:45:00.162 --rc genhtml_legend=1 00:45:00.162 --rc geninfo_all_blocks=1 00:45:00.162 --rc geninfo_unexecuted_blocks=1 00:45:00.162 00:45:00.162 ' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:00.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.162 --rc genhtml_branch_coverage=1 00:45:00.162 --rc genhtml_function_coverage=1 00:45:00.162 --rc genhtml_legend=1 00:45:00.162 --rc geninfo_all_blocks=1 00:45:00.162 --rc geninfo_unexecuted_blocks=1 00:45:00.162 00:45:00.162 ' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:00.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.162 --rc genhtml_branch_coverage=1 00:45:00.162 --rc genhtml_function_coverage=1 00:45:00.162 --rc genhtml_legend=1 00:45:00.162 --rc geninfo_all_blocks=1 00:45:00.162 --rc geninfo_unexecuted_blocks=1 00:45:00.162 00:45:00.162 ' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:00.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.162 --rc genhtml_branch_coverage=1 00:45:00.162 --rc genhtml_function_coverage=1 00:45:00.162 --rc genhtml_legend=1 00:45:00.162 --rc geninfo_all_blocks=1 00:45:00.162 --rc geninfo_unexecuted_blocks=1 00:45:00.162 00:45:00.162 ' 00:45:00.162 16:03:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:00.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:00.162 16:03:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:00.162 16:03:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.162 16:03:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:00.162 16:03:40 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:00.162 16:03:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:45:08.297 16:03:47 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:08.297 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:08.298 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:08.298 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:45:08.298 
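The trace above shows gather_supported_nvmf_pci_devs building per-family device lists (e810, x722, mlx) keyed by PCI vendor:device IDs and then announcing each match. A minimal standalone sketch of that matching step, using lspci as an illustrative stand-in for the harness's pci_bus_cache lookup (the ID list is copied from the trace; everything else here is assumed):

  # Report PCI functions whose vendor:device ID is on the supported list.
  supported="8086:1592 8086:159b 8086:37d2 15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:1017 15b3:1019 15b3:1015 15b3:1013"
  lspci -Dn | while read -r bdf _class id _rest; do
      for want in $supported; do
          [ "$id" = "$want" ] && echo "Found $bdf ($id)"
      done
  done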
16:03:47 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:08.298 Found net devices under 0000:31:00.0: cvl_0_0 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:08.298 Found net devices under 0000:31:00.1: cvl_0_1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:08.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:08.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:45:08.298 00:45:08.298 --- 10.0.0.2 ping statistics --- 00:45:08.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.298 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:08.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:08.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:45:08.298 00:45:08.298 --- 10.0.0.1 ping statistics --- 00:45:08.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.298 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:45:08.298 16:03:47 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:45:08.298 16:03:47 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:45:08.298 16:03:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=751718 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:08.298 16:03:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 751718 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 751718 ']' 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:08.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:08.298 16:03:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.560 [2024-09-27 16:03:48.826654] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:45:08.560 [2024-09-27 16:03:48.826722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:08.560 [2024-09-27 16:03:48.916225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:08.560 [2024-09-27 16:03:48.965072] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:08.560 [2024-09-27 16:03:48.965130] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
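The target was started with --wait-for-rpc, so configuration that must land before subsystem initialization (here the passthru identify handler) can be applied first; the JSON-RPC requests logged below are exactly those two calls. A condensed sketch of the launch-and-configure sequence, with a socket-polling loop standing in for waitforlisten (an assumption about its mechanics; paths are abbreviated from the trace):

  # Start nvmf_tgt inside the test namespace, paused until RPC arrives.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # Wait for the RPC socket the trace mentions (/var/tmp/spdk.sock).
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede init
  ./scripts/rpc.py framework_start_init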
00:45:08.560 [2024-09-27 16:03:48.965138] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:08.560 [2024-09-27 16:03:48.965145] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:08.560 [2024-09-27 16:03:48.965151] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:08.560 [2024-09-27 16:03:48.965305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:08.560 [2024-09-27 16:03:48.965465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:45:08.560 [2024-09-27 16:03:48.965621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.560 [2024-09-27 16:03:48.965622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:45:09.502 16:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.502 INFO: Log level set to 20 00:45:09.502 INFO: Requests: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "method": "nvmf_set_config", 00:45:09.502 "id": 1, 00:45:09.502 "params": { 00:45:09.502 "admin_cmd_passthru": { 00:45:09.502 "identify_ctrlr": true 00:45:09.502 } 00:45:09.502 } 00:45:09.502 } 00:45:09.502 00:45:09.502 INFO: response: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "id": 1, 00:45:09.502 "result": true 00:45:09.502 } 00:45:09.502 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.502 16:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.502 INFO: Setting log level to 20 00:45:09.502 INFO: Setting log level to 20 00:45:09.502 INFO: Log level set to 20 00:45:09.502 INFO: Log level set to 20 00:45:09.502 INFO: Requests: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "method": "framework_start_init", 00:45:09.502 "id": 1 00:45:09.502 } 00:45:09.502 00:45:09.502 INFO: Requests: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "method": "framework_start_init", 00:45:09.502 "id": 1 00:45:09.502 } 00:45:09.502 00:45:09.502 [2024-09-27 16:03:49.753206] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:09.502 INFO: response: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "id": 1, 00:45:09.502 "result": true 00:45:09.502 } 00:45:09.502 00:45:09.502 INFO: response: 00:45:09.502 { 00:45:09.502 "jsonrpc": "2.0", 00:45:09.502 "id": 1, 00:45:09.502 "result": true 00:45:09.502 } 00:45:09.502 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.502 16:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.502 16:03:49 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:45:09.502 INFO: Setting log level to 40 00:45:09.502 INFO: Setting log level to 40 00:45:09.502 INFO: Setting log level to 40 00:45:09.502 [2024-09-27 16:03:49.766771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.502 16:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.502 16:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.502 16:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.762 Nvme0n1 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.762 [2024-09-27 16:03:50.153672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:09.762 [ 00:45:09.762 { 00:45:09.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:09.762 "subtype": "Discovery", 00:45:09.762 "listen_addresses": [], 00:45:09.762 "allow_any_host": true, 00:45:09.762 "hosts": [] 00:45:09.762 }, 00:45:09.762 { 00:45:09.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:09.762 "subtype": "NVMe", 00:45:09.762 "listen_addresses": [ 00:45:09.762 { 00:45:09.762 "trtype": "TCP", 00:45:09.762 "adrfam": "IPv4", 00:45:09.762 "traddr": "10.0.0.2", 00:45:09.762 "trsvcid": "4420" 00:45:09.762 } 00:45:09.762 ], 00:45:09.762 "allow_any_host": true, 00:45:09.762 "hosts": [], 00:45:09.762 "serial_number": 
"SPDK00000000000001", 00:45:09.762 "model_number": "SPDK bdev Controller", 00:45:09.762 "max_namespaces": 1, 00:45:09.762 "min_cntlid": 1, 00:45:09.762 "max_cntlid": 65519, 00:45:09.762 "namespaces": [ 00:45:09.762 { 00:45:09.762 "nsid": 1, 00:45:09.762 "bdev_name": "Nvme0n1", 00:45:09.762 "name": "Nvme0n1", 00:45:09.762 "nguid": "3634473052605494002538450000002B", 00:45:09.762 "uuid": "36344730-5260-5494-0025-38450000002b" 00:45:09.762 } 00:45:09.762 ] 00:45:09.762 } 00:45:09.762 ] 00:45:09.762 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:09.762 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:10.022 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:45:10.022 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:10.022 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:10.022 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:10.282 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:10.282 rmmod nvme_tcp 00:45:10.282 rmmod nvme_fabrics 00:45:10.282 rmmod nvme_keyring 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 
751718 ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 751718 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 751718 ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 751718 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:10.282 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 751718 00:45:10.542 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:10.542 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:10.542 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 751718' 00:45:10.542 killing process with pid 751718 00:45:10.542 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 751718 00:45:10.542 16:03:50 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 751718 00:45:10.802 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:45:10.802 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:10.803 16:03:51 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:10.803 16:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:10.803 16:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.712 16:03:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:12.712 00:45:12.712 real 0m13.319s 00:45:12.712 user 0m10.880s 00:45:12.712 sys 0m6.518s 00:45:12.712 16:03:53 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:12.712 16:03:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:12.712 ************************************ 00:45:12.712 END TEST nvmf_identify_passthru 00:45:12.712 ************************************ 00:45:12.973 16:03:53 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:12.973 16:03:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:12.973 16:03:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:12.973 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:45:12.973 ************************************ 00:45:12.973 START TEST nvmf_dif 00:45:12.973 ************************************ 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:12.973 * Looking for test storage... 
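nvmftestfini above unwinds the fixture before the dif test proceeds. A sketch of what that cleanup amounts to: the iptables-save pipeline is literal from the iptr helper traced above, while ip netns delete is an assumption about what _remove_spdk_ns ultimately runs:

  # Drop the SPDK-tagged firewall rules, flush the initiator address,
  # and remove the target namespace.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns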
00:45:12.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:12.973 16:03:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.973 --rc genhtml_branch_coverage=1 00:45:12.973 --rc genhtml_function_coverage=1 00:45:12.973 --rc genhtml_legend=1 00:45:12.973 --rc geninfo_all_blocks=1 00:45:12.973 --rc geninfo_unexecuted_blocks=1 00:45:12.973 00:45:12.973 ' 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.973 --rc genhtml_branch_coverage=1 00:45:12.973 --rc genhtml_function_coverage=1 00:45:12.973 --rc genhtml_legend=1 00:45:12.973 --rc geninfo_all_blocks=1 00:45:12.973 --rc geninfo_unexecuted_blocks=1 00:45:12.973 00:45:12.973 ' 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:45:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.973 --rc genhtml_branch_coverage=1 00:45:12.973 --rc genhtml_function_coverage=1 00:45:12.973 --rc genhtml_legend=1 00:45:12.973 --rc geninfo_all_blocks=1 00:45:12.973 --rc geninfo_unexecuted_blocks=1 00:45:12.973 00:45:12.973 ' 00:45:12.973 16:03:53 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:12.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.973 --rc genhtml_branch_coverage=1 00:45:12.973 --rc genhtml_function_coverage=1 00:45:12.973 --rc genhtml_legend=1 00:45:12.973 --rc geninfo_all_blocks=1 00:45:12.973 --rc geninfo_unexecuted_blocks=1 00:45:12.973 00:45:12.973 ' 00:45:12.973 16:03:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:12.973 16:03:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:13.234 16:03:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:45:13.234 16:03:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:13.234 16:03:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:13.234 16:03:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:13.234 16:03:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.234 16:03:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.234 16:03:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.234 16:03:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:45:13.234 16:03:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:13.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:13.234 16:03:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:13.234 16:03:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:13.234 16:03:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:13.234 16:03:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:13.234 16:03:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.234 16:03:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:13.234 16:03:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:13.234 16:03:53 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:45:13.234 16:03:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:21.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:21.369 16:04:00 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:21.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:21.369 16:04:00 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:21.370 Found net devices under 0000:31:00.0: cvl_0_0 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:21.370 Found net devices under 0000:31:00.1: cvl_0_1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:21.370 
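nvmf_tcp_init now rebuilds the loopback topology for this test: the target port (cvl_0_0) moves into its own network namespace and takes 10.0.0.2, while the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, so NVMe/TCP traffic crosses a real link between the two functions. The individual steps are traced below; consolidated, they amount to (commands copied from the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup
  ping -c 1 10.0.0.2                                     # verify the target side answers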
16:04:00 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:21.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:21.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:45:21.370 00:45:21.370 --- 10.0.0.2 ping statistics --- 00:45:21.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:21.370 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:21.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:21.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:45:21.370 00:45:21.370 --- 10.0.0.1 ping statistics --- 00:45:21.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:21.370 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:45:21.370 16:04:00 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:23.914 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:45:23.914 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:23.914 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:45:24.175 16:04:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:24.175 16:04:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=757954 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 757954 00:45:24.175 16:04:04 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 757954 ']' 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:45:24.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:24.175 16:04:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:24.175 [2024-09-27 16:04:04.633320] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:45:24.175 [2024-09-27 16:04:04.633382] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:24.435 [2024-09-27 16:04:04.717327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.435 [2024-09-27 16:04:04.763214] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:24.435 [2024-09-27 16:04:04.763266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:24.435 [2024-09-27 16:04:04.763274] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:24.435 [2024-09-27 16:04:04.763281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:24.435 [2024-09-27 16:04:04.763287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:24.436 [2024-09-27 16:04:04.763319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:45:25.005 16:04:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:25.005 16:04:05 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:25.005 16:04:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:25.005 16:04:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:25.005 [2024-09-27 16:04:05.486617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.005 16:04:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:25.005 16:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:25.265 ************************************ 00:45:25.265 START TEST fio_dif_1_default 00:45:25.265 ************************************ 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:25.265 bdev_null0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:25.265 [2024-09-27 16:04:05.570966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:25.265 { 00:45:25.265 "params": { 00:45:25.265 "name": "Nvme$subsystem", 00:45:25.265 "trtype": "$TEST_TRANSPORT", 00:45:25.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:25.265 "adrfam": "ipv4", 00:45:25.265 "trsvcid": "$NVMF_PORT", 00:45:25.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:25.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:25.265 "hdgst": ${hdgst:-false}, 00:45:25.265 
"ddgst": ${ddgst:-false} 00:45:25.265 }, 00:45:25.265 "method": "bdev_nvme_attach_controller" 00:45:25.265 } 00:45:25.265 EOF 00:45:25.265 )") 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=,
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:45:25.265 "params": {
00:45:25.265 "name": "Nvme0",
00:45:25.265 "trtype": "tcp",
00:45:25.265 "traddr": "10.0.0.2",
00:45:25.265 "adrfam": "ipv4",
00:45:25.265 "trsvcid": "4420",
00:45:25.265 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:45:25.265 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:45:25.265 "hdgst": false,
00:45:25.265 "ddgst": false
00:45:25.265 },
00:45:25.265 "method": "bdev_nvme_attach_controller"
00:45:25.265 }'
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:45:25.265 16:04:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:45:25.524 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:45:25.524 fio-3.35
00:45:25.524 Starting 1 thread
00:45:37.752
00:45:37.752 filename0: (groupid=0, jobs=1): err= 0: pid=758487: Fri Sep 27 16:04:16 2024
00:45:37.753 read: IOPS=189, BW=759KiB/s (778kB/s)(7600KiB/10009msec)
00:45:37.753 slat (nsec): min=5396, max=32404, avg=6275.26, stdev=1838.13
00:45:37.753 clat (usec): min=592, max=42029, avg=21054.01, stdev=20167.79
00:45:37.753 lat (usec): min=597, max=42054, avg=21060.29, stdev=20167.73
00:45:37.753 clat percentiles (usec):
00:45:37.753 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 799], 20.00th=[ 840],
00:45:37.753 | 30.00th=[ 857], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157],
00:45:37.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:45:37.753 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:45:37.753 | 99.99th=[42206]
00:45:37.753 bw ( KiB/s): min= 704, max= 768, per=99.83%, avg=758.40, stdev=23.45, samples=20
00:45:37.753 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20
00:45:37.753 lat (usec) : 750=4.68%, 1000=44.37%
00:45:37.753 lat (msec) : 2=0.63%, 4=0.21%, 50=50.11%
00:45:37.753 cpu : usr=93.29%, sys=6.50%, ctx=14, majf=0, minf=145
00:45:37.753 IO depths : 1=25.0%, 2=50.0%, 4=23.7%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:45:37.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:37.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:37.753 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:37.753 latency : target=0, window=0, percentile=100.00%, depth=4
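The headline numbers in the report above are mutually consistent, which is a cheap sanity check worth scripting when scanning many of these runs. Pure shell arithmetic on the values fio just printed, nothing SPDK-specific and no assumptions beyond those values:

# io / runtime gives bandwidth; bandwidth / block size gives IOPS
echo $(( 7600 * 1000 / 10009 ))   # 759, matches BW=759KiB/s (7600KiB over 10009msec)
echo $(( 759 / 4 ))               # 189, matches IOPS=189 at bs=4KiB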
00:45:37.753 00:45:37.753 Run status group 0 (all jobs): 00:45:37.753 READ: bw=759KiB/s (778kB/s), 759KiB/s-759KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10009-10009msec 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 00:45:37.753 real 0m11.105s 00:45:37.753 user 0m24.732s 00:45:37.753 sys 0m0.991s 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 ************************************ 00:45:37.753 END TEST fio_dif_1_default 00:45:37.753 ************************************ 00:45:37.753 16:04:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:37.753 16:04:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:37.753 16:04:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 ************************************ 00:45:37.753 START TEST fio_dif_1_multi_subsystems 00:45:37.753 ************************************ 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 bdev_null0 00:45:37.753 16:04:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 [2024-09-27 16:04:16.756652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 bdev_null1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:37.753 { 00:45:37.753 "params": { 00:45:37.753 "name": "Nvme$subsystem", 00:45:37.753 "trtype": "$TEST_TRANSPORT", 00:45:37.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.753 "adrfam": "ipv4", 00:45:37.753 "trsvcid": "$NVMF_PORT", 00:45:37.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.753 "hdgst": ${hdgst:-false}, 00:45:37.753 "ddgst": ${ddgst:-false} 00:45:37.753 }, 00:45:37.753 "method": "bdev_nvme_attach_controller" 00:45:37.753 } 00:45:37.753 EOF 00:45:37.753 )") 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:45:37.753 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.753 
16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:37.754 { 00:45:37.754 "params": { 00:45:37.754 "name": "Nvme$subsystem", 00:45:37.754 "trtype": "$TEST_TRANSPORT", 00:45:37.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.754 "adrfam": "ipv4", 00:45:37.754 "trsvcid": "$NVMF_PORT", 00:45:37.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.754 "hdgst": ${hdgst:-false}, 00:45:37.754 "ddgst": ${ddgst:-false} 00:45:37.754 }, 00:45:37.754 "method": "bdev_nvme_attach_controller" 00:45:37.754 } 00:45:37.754 EOF 00:45:37.754 )") 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:37.754 "params": { 00:45:37.754 "name": "Nvme0", 00:45:37.754 "trtype": "tcp", 00:45:37.754 "traddr": "10.0.0.2", 00:45:37.754 "adrfam": "ipv4", 00:45:37.754 "trsvcid": "4420", 00:45:37.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.754 "hdgst": false, 00:45:37.754 "ddgst": false 00:45:37.754 }, 00:45:37.754 "method": "bdev_nvme_attach_controller" 00:45:37.754 },{ 00:45:37.754 "params": { 00:45:37.754 "name": "Nvme1", 00:45:37.754 "trtype": "tcp", 00:45:37.754 "traddr": "10.0.0.2", 00:45:37.754 "adrfam": "ipv4", 00:45:37.754 "trsvcid": "4420", 00:45:37.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:37.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:37.754 "hdgst": false, 00:45:37.754 "ddgst": false 00:45:37.754 }, 00:45:37.754 "method": "bdev_nvme_attach_controller" 00:45:37.754 }' 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:37.754 16:04:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.754 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:37.754 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:37.754 fio-3.35 00:45:37.754 Starting 2 threads 00:45:47.753 00:45:47.753 filename0: (groupid=0, jobs=1): err= 0: pid=760685: Fri Sep 27 16:04:27 2024 00:45:47.753 read: IOPS=190, BW=762KiB/s (781kB/s)(7632KiB/10010msec) 00:45:47.753 slat (nsec): min=5398, max=32887, avg=6582.73, stdev=2472.49 00:45:47.753 clat (usec): min=543, max=42119, avg=20966.08, stdev=20142.26 00:45:47.753 lat (usec): min=553, max=42127, avg=20972.67, stdev=20142.15 00:45:47.753 clat percentiles (usec): 00:45:47.753 | 1.00th=[ 652], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 848], 00:45:47.753 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 1909], 60.00th=[41157], 00:45:47.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:47.753 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:47.753 | 99.99th=[42206] 00:45:47.753 bw ( KiB/s): min= 704, max= 768, per=49.91%, avg=761.60, stdev=19.70, samples=20 00:45:47.753 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:45:47.753 lat (usec) : 750=1.47%, 1000=45.13% 00:45:47.753 lat (msec) : 2=3.51%, 50=49.90% 00:45:47.753 cpu : usr=95.50%, sys=4.29%, ctx=9, majf=0, minf=134 00:45:47.753 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:47.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:47.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:47.753 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:47.753 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:47.753 filename1: (groupid=0, jobs=1): err= 0: pid=760686: Fri Sep 27 16:04:27 2024 00:45:47.753 read: IOPS=190, BW=762KiB/s (781kB/s)(7632KiB/10011msec) 00:45:47.753 slat (nsec): min=5432, max=31687, avg=6522.00, stdev=2461.53 00:45:47.753 clat (usec): min=562, max=42591, avg=20968.96, stdev=20164.66 00:45:47.753 lat (usec): min=567, max=42599, avg=20975.49, stdev=20164.55 00:45:47.753 clat percentiles (usec): 00:45:47.753 | 1.00th=[ 603], 5.00th=[ 725], 10.00th=[ 799], 20.00th=[ 832], 00:45:47.753 | 30.00th=[ 857], 40.00th=[ 906], 50.00th=[ 1860], 60.00th=[41157], 00:45:47.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:47.753 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:45:47.753 | 99.99th=[42730] 00:45:47.753 bw ( KiB/s): min= 704, max= 768, per=49.91%, avg=761.60, stdev=19.70, samples=20 00:45:47.753 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:45:47.753 lat (usec) : 750=7.29%, 1000=38.68% 00:45:47.753 lat (msec) : 2=4.14%, 50=49.90% 00:45:47.753 cpu : usr=95.34%, sys=4.46%, ctx=13, majf=0, minf=125 00:45:47.753 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:47.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:47.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:47.753 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:47.753 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:47.753 00:45:47.753 Run status group 0 (all jobs): 00:45:47.753 READ: bw=1525KiB/s (1561kB/s), 762KiB/s-762KiB/s (781kB/s-781kB/s), io=14.9MiB (15.6MB), run=10010-10011msec 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 00:45:47.753 real 0m11.372s 00:45:47.753 user 0m34.391s 00:45:47.753 sys 0m1.269s 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 ************************************ 00:45:47.753 END TEST fio_dif_1_multi_subsystems 00:45:47.753 ************************************ 00:45:47.753 16:04:28 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:47.753 16:04:28 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:47.753 16:04:28 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 ************************************ 00:45:47.753 START TEST fio_dif_rand_params 00:45:47.753 ************************************ 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 bdev_null0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:47.753 [2024-09-27 16:04:28.208362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:47.753 { 00:45:47.753 "params": { 00:45:47.753 "name": "Nvme$subsystem", 00:45:47.753 "trtype": "$TEST_TRANSPORT", 00:45:47.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:47.753 "adrfam": "ipv4", 00:45:47.753 "trsvcid": "$NVMF_PORT", 00:45:47.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:47.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:47.753 "hdgst": ${hdgst:-false}, 00:45:47.753 "ddgst": ${ddgst:-false} 00:45:47.753 }, 00:45:47.753 "method": "bdev_nvme_attach_controller" 00:45:47.753 } 00:45:47.753 EOF 00:45:47.753 )") 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@580 -- # jq .
00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=,
00:45:47.753 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:45:47.753 "params": {
00:45:47.753 "name": "Nvme0",
00:45:47.753 "trtype": "tcp",
00:45:47.753 "traddr": "10.0.0.2",
00:45:47.753 "adrfam": "ipv4",
00:45:47.753 "trsvcid": "4420",
00:45:47.753 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:45:47.753 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:45:47.753 "hdgst": false,
00:45:47.753 "ddgst": false
00:45:47.753 },
00:45:47.753 "method": "bdev_nvme_attach_controller"
00:45:47.753 }'
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:45:48.015 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:45:48.275 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:45:48.275 ...
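The target-side objects this fio_dif_rand_params pass traced into existence can be replayed by hand against the same target. A sketch assuming rpc_cmd resolves, as nvmf/common.sh normally does, to scripts/rpc.py aimed at the rpc_addr seen earlier (/var/tmp/spdk.sock); the commands mirror the rpc_cmd lines in the trace, and only the RPC and SOCK variable names are invented here:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock   # rpc_addr from the waitforlisten trace

# 64 MB null bdev: 512-byte blocks plus 16 bytes of metadata, protection type 3
$RPC -s $SOCK bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Expose it over NVMe/TCP on the listener the trace shows
$RPC -s $SOCK nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC -s $SOCK nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420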
00:45:48.275 fio-3.35 00:45:48.275 Starting 3 threads 00:45:54.860 00:45:54.860 filename0: (groupid=0, jobs=1): err= 0: pid=762897: Fri Sep 27 16:04:34 2024 00:45:54.860 read: IOPS=363, BW=45.4MiB/s (47.6MB/s)(229MiB/5047msec) 00:45:54.860 slat (nsec): min=5501, max=31834, avg=8658.89, stdev=1119.15 00:45:54.860 clat (usec): min=3488, max=87729, avg=8225.45, stdev=6747.14 00:45:54.860 lat (usec): min=3497, max=87735, avg=8234.11, stdev=6747.19 00:45:54.860 clat percentiles (usec): 00:45:54.860 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5866], 00:45:54.860 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7439], 60.00th=[ 7767], 00:45:54.860 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:45:54.860 | 99.00th=[47449], 99.50th=[48497], 99.90th=[86508], 99.95th=[87557], 00:45:54.860 | 99.99th=[87557] 00:45:54.860 bw ( KiB/s): min=25088, max=57344, per=38.01%, avg=46848.00, stdev=9335.33, samples=10 00:45:54.860 iops : min= 196, max= 448, avg=366.00, stdev=72.93, samples=10 00:45:54.860 lat (msec) : 4=0.44%, 10=96.62%, 20=0.49%, 50=2.29%, 100=0.16% 00:45:54.860 cpu : usr=94.23%, sys=5.53%, ctx=15, majf=0, minf=129 00:45:54.860 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:54.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.860 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:54.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:54.860 filename0: (groupid=0, jobs=1): err= 0: pid=762898: Fri Sep 27 16:04:34 2024 00:45:54.860 read: IOPS=276, BW=34.6MiB/s (36.2MB/s)(174MiB/5044msec) 00:45:54.860 slat (nsec): min=5479, max=35125, avg=8609.98, stdev=1447.84 00:45:54.860 clat (usec): min=4214, max=91594, avg=10804.94, stdev=12658.36 00:45:54.860 lat (usec): min=4222, max=91602, avg=10813.55, stdev=12658.48 00:45:54.860 clat percentiles (usec): 00:45:54.860 | 1.00th=[ 4752], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6521], 00:45:54.860 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:45:54.860 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[47973], 00:45:54.860 | 99.00th=[49546], 99.50th=[87557], 99.90th=[90702], 99.95th=[91751], 00:45:54.860 | 99.99th=[91751] 00:45:54.860 bw ( KiB/s): min=25088, max=47872, per=28.93%, avg=35660.80, stdev=8944.18, samples=10 00:45:54.860 iops : min= 196, max= 374, avg=278.60, stdev=69.88, samples=10 00:45:54.860 lat (msec) : 10=91.61%, 20=0.07%, 50=7.53%, 100=0.79% 00:45:54.860 cpu : usr=95.74%, sys=4.01%, ctx=7, majf=0, minf=73 00:45:54.860 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:54.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.860 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:54.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:54.860 filename0: (groupid=0, jobs=1): err= 0: pid=762899: Fri Sep 27 16:04:34 2024 00:45:54.860 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5045msec) 00:45:54.860 slat (nsec): min=5480, max=31581, avg=8437.10, stdev=1989.82 00:45:54.860 clat (usec): min=4226, max=50613, avg=9237.35, stdev=4168.98 00:45:54.860 lat (usec): min=4232, max=50620, avg=9245.79, stdev=4169.14 00:45:54.860 clat percentiles (usec): 00:45:54.860 | 1.00th=[ 5080], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 
6783], 00:45:54.861 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9634], 00:45:54.861 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11600], 95.00th=[12125], 00:45:54.861 | 99.00th=[13173], 99.50th=[48497], 99.90th=[50070], 99.95th=[50594], 00:45:54.861 | 99.99th=[50594] 00:45:54.861 bw ( KiB/s): min=38144, max=46685, per=33.86%, avg=41737.30, stdev=2946.07, samples=10 00:45:54.861 iops : min= 298, max= 364, avg=326.00, stdev=22.88, samples=10 00:45:54.861 lat (msec) : 10=65.13%, 20=34.01%, 50=0.74%, 100=0.12% 00:45:54.861 cpu : usr=91.44%, sys=6.94%, ctx=454, majf=0, minf=65 00:45:54.861 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:54.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:54.861 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:54.861 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:54.861 00:45:54.861 Run status group 0 (all jobs): 00:45:54.861 READ: bw=120MiB/s (126MB/s), 34.6MiB/s-45.4MiB/s (36.2MB/s-47.6MB/s), io=608MiB (637MB), run=5044-5047msec 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 bdev_null0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 [2024-09-27 16:04:34.296485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 bdev_null1 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 bdev_null2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:54.861 { 00:45:54.861 "params": { 00:45:54.861 "name": "Nvme$subsystem", 00:45:54.861 "trtype": "$TEST_TRANSPORT", 00:45:54.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:54.861 "adrfam": "ipv4", 00:45:54.861 "trsvcid": "$NVMF_PORT", 00:45:54.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:54.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:54.861 "hdgst": ${hdgst:-false}, 00:45:54.861 "ddgst": ${ddgst:-false} 00:45:54.861 }, 00:45:54.861 "method": "bdev_nvme_attach_controller" 00:45:54.861 } 00:45:54.861 EOF 00:45:54.861 )") 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:54.861 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:54.862 { 00:45:54.862 "params": { 00:45:54.862 "name": "Nvme$subsystem", 00:45:54.862 "trtype": "$TEST_TRANSPORT", 00:45:54.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:54.862 "adrfam": "ipv4", 00:45:54.862 "trsvcid": "$NVMF_PORT", 00:45:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:54.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:54.862 "hdgst": ${hdgst:-false}, 00:45:54.862 "ddgst": ${ddgst:-false} 00:45:54.862 }, 00:45:54.862 "method": "bdev_nvme_attach_controller" 00:45:54.862 } 00:45:54.862 EOF 00:45:54.862 )") 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:54.862 { 00:45:54.862 "params": { 00:45:54.862 "name": "Nvme$subsystem", 00:45:54.862 "trtype": "$TEST_TRANSPORT", 00:45:54.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:54.862 "adrfam": "ipv4", 00:45:54.862 "trsvcid": "$NVMF_PORT", 00:45:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:54.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:54.862 "hdgst": ${hdgst:-false}, 00:45:54.862 "ddgst": ${ddgst:-false} 00:45:54.862 }, 00:45:54.862 "method": "bdev_nvme_attach_controller" 00:45:54.862 } 00:45:54.862 EOF 00:45:54.862 )") 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:54.862 "params": { 00:45:54.862 "name": "Nvme0", 00:45:54.862 "trtype": "tcp", 00:45:54.862 "traddr": "10.0.0.2", 00:45:54.862 "adrfam": "ipv4", 00:45:54.862 "trsvcid": "4420", 00:45:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:54.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:54.862 "hdgst": false, 00:45:54.862 "ddgst": false 00:45:54.862 }, 00:45:54.862 "method": "bdev_nvme_attach_controller" 00:45:54.862 },{ 00:45:54.862 "params": { 00:45:54.862 "name": "Nvme1", 00:45:54.862 "trtype": "tcp", 00:45:54.862 "traddr": "10.0.0.2", 00:45:54.862 "adrfam": "ipv4", 00:45:54.862 "trsvcid": "4420", 00:45:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:54.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:54.862 "hdgst": false, 00:45:54.862 "ddgst": false 00:45:54.862 }, 00:45:54.862 "method": "bdev_nvme_attach_controller" 00:45:54.862 },{ 00:45:54.862 "params": { 00:45:54.862 "name": "Nvme2", 00:45:54.862 "trtype": "tcp", 00:45:54.862 "traddr": "10.0.0.2", 00:45:54.862 "adrfam": "ipv4", 00:45:54.862 "trsvcid": "4420", 00:45:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:54.862 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:54.862 "hdgst": false, 00:45:54.862 "ddgst": false 00:45:54.862 }, 00:45:54.862 "method": "bdev_nvme_attach_controller" 00:45:54.862 }' 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:54.862 16:04:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:54.862 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:54.862 ... 00:45:54.862 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:54.862 ... 00:45:54.862 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:54.862 ... 00:45:54.862 fio-3.35 00:45:54.862 Starting 24 threads 00:46:07.092 00:46:07.092 filename0: (groupid=0, jobs=1): err= 0: pid=764380: Fri Sep 27 16:04:45 2024 00:46:07.092 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10017msec) 00:46:07.092 slat (nsec): min=5572, max=96692, avg=19459.01, stdev=16130.28 00:46:07.092 clat (usec): min=4836, max=35969, avg=23442.14, stdev=1971.60 00:46:07.092 lat (usec): min=4858, max=35982, avg=23461.60, stdev=1971.03 00:46:07.092 clat percentiles (usec): 00:46:07.092 | 1.00th=[12911], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:46:07.092 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.092 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.092 | 99.00th=[26608], 99.50th=[30802], 99.90th=[34341], 99.95th=[35914], 00:46:07.092 | 99.99th=[35914] 00:46:07.092 bw ( KiB/s): min= 2560, max= 2949, per=4.18%, avg=2709.25, stdev=90.07, samples=20 00:46:07.092 iops : min= 640, max= 737, avg=677.25, stdev=22.46, samples=20 00:46:07.092 lat (msec) : 10=0.47%, 20=2.50%, 50=97.03% 00:46:07.092 cpu : usr=99.05%, sys=0.59%, ctx=71, majf=0, minf=48 00:46:07.092 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:46:07.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 issued rwts: total=6790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.092 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.092 filename0: (groupid=0, jobs=1): err= 0: pid=764381: Fri Sep 27 16:04:45 2024 00:46:07.092 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10004msec) 00:46:07.092 slat (nsec): min=5566, max=83871, avg=10468.24, stdev=8132.71 00:46:07.092 clat (usec): min=2094, max=25380, avg=23563.06, stdev=1626.79 00:46:07.092 lat (usec): min=2114, max=25388, avg=23573.52, stdev=1625.76 00:46:07.092 clat percentiles (usec): 00:46:07.092 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:46:07.092 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:46:07.092 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.092 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:46:07.092 | 99.99th=[25297] 00:46:07.092 bw ( KiB/s): min= 2560, max= 3072, per=4.18%, avg=2707.89, stdev=97.96, samples=19 00:46:07.092 iops : min= 640, max= 768, avg=676.95, stdev=24.50, samples=19 00:46:07.092 lat (msec) : 4=0.10%, 10=0.40%, 20=1.12%, 50=98.37% 00:46:07.092 cpu : usr=98.62%, sys=0.87%, ctx=206, 
majf=0, minf=49 00:46:07.092 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.092 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.092 filename0: (groupid=0, jobs=1): err= 0: pid=764382: Fri Sep 27 16:04:45 2024 00:46:07.092 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10002msec) 00:46:07.092 slat (nsec): min=5562, max=92479, avg=11717.72, stdev=11180.92 00:46:07.092 clat (usec): min=12114, max=32687, avg=23716.52, stdev=1070.28 00:46:07.092 lat (usec): min=12127, max=32708, avg=23728.24, stdev=1068.91 00:46:07.092 clat percentiles (usec): 00:46:07.092 | 1.00th=[19792], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:46:07.092 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:46:07.092 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.092 | 99.00th=[25297], 99.50th=[29492], 99.90th=[32637], 99.95th=[32637], 00:46:07.092 | 99.99th=[32637] 00:46:07.092 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2688.79, stdev=42.71, samples=19 00:46:07.092 iops : min= 640, max= 704, avg=672.16, stdev=10.67, samples=19 00:46:07.092 lat (msec) : 20=1.12%, 50=98.88% 00:46:07.092 cpu : usr=98.82%, sys=0.75%, ctx=125, majf=0, minf=49 00:46:07.092 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:07.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.092 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.092 filename0: (groupid=0, jobs=1): err= 0: pid=764383: Fri Sep 27 16:04:45 2024 00:46:07.092 read: IOPS=681, BW=2726KiB/s (2791kB/s)(26.6MiB/10010msec) 00:46:07.092 slat (usec): min=5, max=103, avg=21.49, stdev=17.47 00:46:07.092 clat (usec): min=8507, max=42661, avg=23303.32, stdev=3830.38 00:46:07.092 lat (usec): min=8514, max=42739, avg=23324.82, stdev=3832.07 00:46:07.092 clat percentiles (usec): 00:46:07.092 | 1.00th=[12911], 5.00th=[15795], 10.00th=[18482], 20.00th=[22152], 00:46:07.092 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.092 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26870], 95.00th=[29492], 00:46:07.092 | 99.00th=[35390], 99.50th=[39584], 99.90th=[41157], 99.95th=[42206], 00:46:07.092 | 99.99th=[42730] 00:46:07.092 bw ( KiB/s): min= 2480, max= 3024, per=4.20%, avg=2720.74, stdev=128.59, samples=19 00:46:07.092 iops : min= 620, max= 756, avg=680.11, stdev=32.16, samples=19 00:46:07.092 lat (msec) : 10=0.12%, 20=13.10%, 50=86.78% 00:46:07.092 cpu : usr=98.96%, sys=0.71%, ctx=70, majf=0, minf=37 00:46:07.092 IO depths : 1=2.7%, 2=5.6%, 4=13.5%, 8=67.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:46:07.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 complete : 0=0.0%, 4=91.2%, 8=4.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.092 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.092 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.092 filename0: (groupid=0, jobs=1): err= 0: pid=764384: Fri Sep 27 16:04:45 2024 00:46:07.092 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:46:07.092 slat 
(usec): min=5, max=112, avg=24.97, stdev=17.92 00:46:07.092 clat (usec): min=16769, max=30535, avg=23613.53, stdev=744.41 00:46:07.092 lat (usec): min=16783, max=30541, avg=23638.50, stdev=742.40 00:46:07.092 clat percentiles (usec): 00:46:07.092 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.092 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.092 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.093 | 99.00th=[25035], 99.50th=[25297], 99.90th=[29492], 99.95th=[30540], 00:46:07.093 | 99.99th=[30540] 00:46:07.093 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2680.63, stdev=29.27, samples=19 00:46:07.093 iops : min= 640, max= 672, avg=670.11, stdev= 7.32, samples=19 00:46:07.093 lat (msec) : 20=0.60%, 50=99.40% 00:46:07.093 cpu : usr=97.51%, sys=1.47%, ctx=870, majf=0, minf=58 00:46:07.093 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename0: (groupid=0, jobs=1): err= 0: pid=764385: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.3MiB/10015msec) 00:46:07.093 slat (usec): min=5, max=102, avg=25.68, stdev=15.04 00:46:07.093 clat (usec): min=13885, max=40607, avg=23612.50, stdev=1244.46 00:46:07.093 lat (usec): min=13893, max=40636, avg=23638.18, stdev=1244.05 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[18744], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.093 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.093 | 99.00th=[25297], 99.50th=[28705], 99.90th=[39584], 99.95th=[39584], 00:46:07.093 | 99.99th=[40633] 00:46:07.093 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2684.05, stdev=46.49, samples=19 00:46:07.093 iops : min= 640, max= 704, avg=671.00, stdev=11.62, samples=19 00:46:07.093 lat (msec) : 20=1.01%, 50=98.99% 00:46:07.093 cpu : usr=98.68%, sys=0.93%, ctx=57, majf=0, minf=34 00:46:07.093 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename0: (groupid=0, jobs=1): err= 0: pid=764386: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=708, BW=2833KiB/s (2901kB/s)(27.7MiB/10009msec) 00:46:07.093 slat (usec): min=5, max=111, avg=10.88, stdev= 9.46 00:46:07.093 clat (usec): min=1740, max=41371, avg=22519.93, stdev=3999.06 00:46:07.093 lat (usec): min=1759, max=41390, avg=22530.80, stdev=3999.67 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[ 8094], 5.00th=[15139], 10.00th=[16909], 20.00th=[20317], 00:46:07.093 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[27132], 00:46:07.093 | 99.00th=[33162], 99.50th=[35914], 99.90th=[41157], 99.95th=[41157], 00:46:07.093 | 99.99th=[41157] 00:46:07.093 bw ( KiB/s): min= 
2656, max= 3312, per=4.39%, avg=2842.32, stdev=167.59, samples=19 00:46:07.093 iops : min= 664, max= 828, avg=710.53, stdev=41.95, samples=19 00:46:07.093 lat (msec) : 2=0.14%, 4=0.49%, 10=0.66%, 20=17.30%, 50=81.41% 00:46:07.093 cpu : usr=98.84%, sys=0.84%, ctx=19, majf=0, minf=65 00:46:07.093 IO depths : 1=1.4%, 2=3.6%, 4=11.8%, 8=71.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=7088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename0: (groupid=0, jobs=1): err= 0: pid=764387: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10006msec) 00:46:07.093 slat (usec): min=5, max=105, avg=14.18, stdev=12.65 00:46:07.093 clat (usec): min=6228, max=45564, avg=23757.51, stdev=3269.89 00:46:07.093 lat (usec): min=6234, max=45583, avg=23771.69, stdev=3270.64 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[13698], 5.00th=[18482], 10.00th=[21103], 20.00th=[23200], 00:46:07.093 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:46:07.093 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25560], 95.00th=[29492], 00:46:07.093 | 99.00th=[34866], 99.50th=[40109], 99.90th=[43779], 99.95th=[44303], 00:46:07.093 | 99.99th=[45351] 00:46:07.093 bw ( KiB/s): min= 2536, max= 2784, per=4.13%, avg=2672.74, stdev=55.38, samples=19 00:46:07.093 iops : min= 634, max= 696, avg=668.11, stdev=13.86, samples=19 00:46:07.093 lat (msec) : 10=0.46%, 20=7.06%, 50=92.48% 00:46:07.093 cpu : usr=99.12%, sys=0.60%, ctx=16, majf=0, minf=70 00:46:07.093 IO depths : 1=0.4%, 2=1.1%, 4=4.2%, 8=78.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=89.8%, 8=8.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename1: (groupid=0, jobs=1): err= 0: pid=764388: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=687, BW=2749KiB/s (2815kB/s)(26.9MiB/10003msec) 00:46:07.093 slat (nsec): min=5420, max=95269, avg=19790.39, stdev=14922.12 00:46:07.093 clat (usec): min=2634, max=42343, avg=23114.95, stdev=3599.06 00:46:07.093 lat (usec): min=2640, max=42367, avg=23134.74, stdev=3601.12 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[10814], 5.00th=[16057], 10.00th=[19268], 20.00th=[23200], 00:46:07.093 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[26608], 00:46:07.093 | 99.00th=[35914], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206], 00:46:07.093 | 99.99th=[42206] 00:46:07.093 bw ( KiB/s): min= 2565, max= 2928, per=4.22%, avg=2732.53, stdev=89.98, samples=19 00:46:07.093 iops : min= 641, max= 732, avg=683.05, stdev=22.56, samples=19 00:46:07.093 lat (msec) : 4=0.13%, 10=0.68%, 20=10.26%, 50=88.93% 00:46:07.093 cpu : usr=98.99%, sys=0.72%, ctx=14, majf=0, minf=45 00:46:07.093 IO depths : 1=3.5%, 2=7.4%, 4=16.8%, 8=62.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=92.1%, 8=3.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6874,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename1: (groupid=0, jobs=1): err= 0: pid=764389: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=672, BW=2690KiB/s (2754kB/s)(26.3MiB/10017msec) 00:46:07.093 slat (nsec): min=5570, max=88137, avg=20367.49, stdev=14309.16 00:46:07.093 clat (usec): min=14078, max=25091, avg=23615.92, stdev=743.03 00:46:07.093 lat (usec): min=14099, max=25097, avg=23636.29, stdev=741.96 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[21627], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.093 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.093 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:46:07.093 | 99.99th=[25035] 00:46:07.093 bw ( KiB/s): min= 2560, max= 2810, per=4.15%, avg=2687.68, stdev=41.68, samples=19 00:46:07.093 iops : min= 640, max= 702, avg=671.89, stdev=10.34, samples=19 00:46:07.093 lat (msec) : 20=0.62%, 50=99.38% 00:46:07.093 cpu : usr=99.07%, sys=0.65%, ctx=15, majf=0, minf=41 00:46:07.093 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename1: (groupid=0, jobs=1): err= 0: pid=764390: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=682, BW=2731KiB/s (2796kB/s)(26.7MiB/10016msec) 00:46:07.093 slat (nsec): min=5559, max=97673, avg=29862.95, stdev=18576.93 00:46:07.093 clat (usec): min=7465, max=38331, avg=23176.31, stdev=2162.16 00:46:07.093 lat (usec): min=7475, max=38352, avg=23206.17, stdev=2164.94 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[12911], 5.00th=[19268], 10.00th=[22938], 20.00th=[23200], 00:46:07.093 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:46:07.093 | 99.00th=[25297], 99.50th=[28181], 99.90th=[36963], 99.95th=[38011], 00:46:07.093 | 99.99th=[38536] 00:46:07.093 bw ( KiB/s): min= 2682, max= 2992, per=4.21%, avg=2728.75, stdev=91.80, samples=20 00:46:07.093 iops : min= 670, max= 748, avg=682.15, stdev=22.93, samples=20 00:46:07.093 lat (msec) : 10=0.29%, 20=5.13%, 50=94.57% 00:46:07.093 cpu : usr=98.70%, sys=0.84%, ctx=67, majf=0, minf=37 00:46:07.093 IO depths : 1=5.8%, 2=11.6%, 4=23.8%, 8=52.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.093 filename1: (groupid=0, jobs=1): err= 0: pid=764391: Fri Sep 27 16:04:45 2024 00:46:07.093 read: IOPS=672, BW=2692KiB/s (2757kB/s)(26.3MiB/10006msec) 00:46:07.093 slat (usec): min=5, max=100, avg=31.48, stdev=17.14 00:46:07.093 clat (usec): min=5246, max=49118, avg=23477.74, stdev=1757.46 00:46:07.093 lat (usec): min=5253, max=49135, avg=23509.22, stdev=1758.22 00:46:07.093 clat percentiles (usec): 00:46:07.093 | 1.00th=[21103], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 
00:46:07.093 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.093 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:46:07.093 | 99.00th=[24773], 99.50th=[25297], 99.90th=[44827], 99.95th=[44827], 00:46:07.093 | 99.99th=[49021] 00:46:07.093 bw ( KiB/s): min= 2560, max= 2704, per=4.13%, avg=2673.84, stdev=39.67, samples=19 00:46:07.093 iops : min= 640, max= 676, avg=668.37, stdev= 9.94, samples=19 00:46:07.093 lat (msec) : 10=0.45%, 20=0.53%, 50=99.02% 00:46:07.093 cpu : usr=98.80%, sys=0.78%, ctx=112, majf=0, minf=39 00:46:07.093 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:46:07.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.093 issued rwts: total=6734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.093 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename1: (groupid=0, jobs=1): err= 0: pid=764392: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10004msec) 00:46:07.094 slat (usec): min=5, max=105, avg=32.37, stdev=18.78 00:46:07.094 clat (usec): min=5340, max=43172, avg=23436.32, stdev=1742.28 00:46:07.094 lat (usec): min=5346, max=43194, avg=23468.69, stdev=1743.58 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[21365], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:46:07.094 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:46:07.094 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:46:07.094 | 99.00th=[24773], 99.50th=[25035], 99.90th=[43254], 99.95th=[43254], 00:46:07.094 | 99.99th=[43254] 00:46:07.094 bw ( KiB/s): min= 2560, max= 2698, per=4.13%, avg=2674.16, stdev=40.65, samples=19 00:46:07.094 iops : min= 640, max= 674, avg=668.47, stdev=10.15, samples=19 00:46:07.094 lat (msec) : 10=0.48%, 20=0.50%, 50=99.02% 00:46:07.094 cpu : usr=99.15%, sys=0.55%, ctx=11, majf=0, minf=34 00:46:07.094 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename1: (groupid=0, jobs=1): err= 0: pid=764393: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=672, BW=2692KiB/s (2757kB/s)(26.3MiB/10009msec) 00:46:07.094 slat (nsec): min=5591, max=85243, avg=19561.82, stdev=14381.63 00:46:07.094 clat (usec): min=11884, max=39918, avg=23594.47, stdev=1188.92 00:46:07.094 lat (usec): min=11893, max=39937, avg=23614.04, stdev=1188.24 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.094 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.094 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.094 | 99.00th=[24773], 99.50th=[29230], 99.90th=[30802], 99.95th=[40109], 00:46:07.094 | 99.99th=[40109] 00:46:07.094 bw ( KiB/s): min= 2560, max= 2810, per=4.15%, avg=2687.95, stdev=44.66, samples=19 00:46:07.094 iops : min= 640, max= 702, avg=671.95, stdev=11.09, samples=19 00:46:07.094 lat (msec) : 20=1.32%, 50=98.68% 00:46:07.094 cpu : usr=99.09%, sys=0.61%, ctx=11, majf=0, minf=30 00:46:07.094 IO depths : 
1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename1: (groupid=0, jobs=1): err= 0: pid=764394: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=680, BW=2723KiB/s (2789kB/s)(26.7MiB/10021msec) 00:46:07.094 slat (nsec): min=5561, max=82128, avg=8184.41, stdev=4547.64 00:46:07.094 clat (usec): min=2464, max=31500, avg=23428.72, stdev=2343.61 00:46:07.094 lat (usec): min=2483, max=31514, avg=23436.91, stdev=2342.28 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[ 7898], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:46:07.094 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:46:07.094 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.094 | 99.00th=[24773], 99.50th=[24773], 99.90th=[30802], 99.95th=[31327], 00:46:07.094 | 99.99th=[31589] 00:46:07.094 bw ( KiB/s): min= 2560, max= 3256, per=4.20%, avg=2722.20, stdev=135.42, samples=20 00:46:07.094 iops : min= 640, max= 814, avg=680.50, stdev=33.87, samples=20 00:46:07.094 lat (msec) : 4=0.45%, 10=0.59%, 20=1.69%, 50=97.27% 00:46:07.094 cpu : usr=98.95%, sys=0.74%, ctx=64, majf=0, minf=82 00:46:07.094 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename1: (groupid=0, jobs=1): err= 0: pid=764395: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=673, BW=2693KiB/s (2757kB/s)(26.3MiB/10006msec) 00:46:07.094 slat (nsec): min=5554, max=99488, avg=24943.05, stdev=17836.93 00:46:07.094 clat (usec): min=9942, max=40717, avg=23556.86, stdev=1502.47 00:46:07.094 lat (usec): min=9963, max=40748, avg=23581.80, stdev=1502.80 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[17171], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:46:07.094 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.094 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.094 | 99.00th=[26608], 99.50th=[29754], 99.90th=[35390], 99.95th=[40633], 00:46:07.094 | 99.99th=[40633] 00:46:07.094 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2687.37, stdev=42.71, samples=19 00:46:07.094 iops : min= 640, max= 704, avg=671.79, stdev=10.69, samples=19 00:46:07.094 lat (msec) : 10=0.10%, 20=1.93%, 50=97.97% 00:46:07.094 cpu : usr=99.07%, sys=0.64%, ctx=14, majf=0, minf=29 00:46:07.094 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename2: (groupid=0, jobs=1): err= 0: pid=764396: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=673, BW=2692KiB/s (2757kB/s)(26.3MiB/10007msec) 00:46:07.094 slat (nsec): min=5420, 
max=52022, avg=9164.01, stdev=5034.80 00:46:07.094 clat (usec): min=6146, max=40066, avg=23684.43, stdev=1480.04 00:46:07.094 lat (usec): min=6153, max=40084, avg=23693.60, stdev=1480.31 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:46:07.094 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:46:07.094 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.094 | 99.00th=[24773], 99.50th=[25297], 99.90th=[40109], 99.95th=[40109], 00:46:07.094 | 99.99th=[40109] 00:46:07.094 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2680.32, stdev=29.22, samples=19 00:46:07.094 iops : min= 640, max= 672, avg=670.00, stdev= 7.30, samples=19 00:46:07.094 lat (msec) : 10=0.33%, 20=0.40%, 50=99.27% 00:46:07.094 cpu : usr=98.58%, sys=0.87%, ctx=108, majf=0, minf=43 00:46:07.094 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename2: (groupid=0, jobs=1): err= 0: pid=764397: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=672, BW=2690KiB/s (2755kB/s)(26.3MiB/10016msec) 00:46:07.094 slat (nsec): min=5493, max=90525, avg=19570.58, stdev=12569.45 00:46:07.094 clat (usec): min=9900, max=36727, avg=23619.15, stdev=823.02 00:46:07.094 lat (usec): min=9910, max=36745, avg=23638.72, stdev=822.52 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:46:07.094 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.094 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.094 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:46:07.094 | 99.99th=[36963] 00:46:07.094 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2687.95, stdev=42.71, samples=19 00:46:07.094 iops : min= 640, max= 704, avg=671.95, stdev=10.68, samples=19 00:46:07.094 lat (msec) : 10=0.03%, 20=0.71%, 50=99.26% 00:46:07.094 cpu : usr=98.99%, sys=0.70%, ctx=17, majf=0, minf=43 00:46:07.094 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename2: (groupid=0, jobs=1): err= 0: pid=764398: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.3MiB/10044msec) 00:46:07.094 slat (usec): min=5, max=112, avg=32.20, stdev=17.85 00:46:07.094 clat (usec): min=6169, max=51242, avg=23558.29, stdev=1777.42 00:46:07.094 lat (usec): min=6175, max=51249, avg=23590.49, stdev=1777.15 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[22414], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:46:07.094 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:46:07.094 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:46:07.094 | 99.00th=[25035], 99.50th=[26346], 99.90th=[45351], 99.95th=[51119], 00:46:07.094 | 99.99th=[51119] 00:46:07.094 
bw ( KiB/s): min= 2560, max= 2693, per=4.13%, avg=2674.16, stdev=40.29, samples=19 00:46:07.094 iops : min= 640, max= 673, avg=668.47, stdev=10.06, samples=19 00:46:07.094 lat (msec) : 10=0.15%, 20=0.54%, 50=99.26%, 100=0.06% 00:46:07.094 cpu : usr=98.83%, sys=0.74%, ctx=62, majf=0, minf=32 00:46:07.094 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:07.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.094 issued rwts: total=6727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.094 filename2: (groupid=0, jobs=1): err= 0: pid=764399: Fri Sep 27 16:04:45 2024 00:46:07.094 read: IOPS=685, BW=2743KiB/s (2809kB/s)(26.8MiB/10003msec) 00:46:07.094 slat (nsec): min=5554, max=98109, avg=14561.70, stdev=13778.92 00:46:07.094 clat (usec): min=5356, max=53986, avg=23267.13, stdev=3520.98 00:46:07.094 lat (usec): min=5370, max=54010, avg=23281.69, stdev=3521.44 00:46:07.094 clat percentiles (usec): 00:46:07.094 | 1.00th=[13435], 5.00th=[16909], 10.00th=[18744], 20.00th=[21890], 00:46:07.094 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.094 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26084], 95.00th=[29230], 00:46:07.094 | 99.00th=[33817], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:46:07.094 | 99.99th=[53740] 00:46:07.094 bw ( KiB/s): min= 2597, max= 2896, per=4.22%, avg=2735.05, stdev=94.91, samples=19 00:46:07.094 iops : min= 649, max= 724, avg=683.68, stdev=23.74, samples=19 00:46:07.095 lat (msec) : 10=0.32%, 20=14.21%, 50=85.44%, 100=0.03% 00:46:07.095 cpu : usr=98.61%, sys=0.91%, ctx=66, majf=0, minf=42 00:46:07.095 IO depths : 1=0.1%, 2=0.4%, 4=3.4%, 8=79.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:46:07.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.095 filename2: (groupid=0, jobs=1): err= 0: pid=764400: Fri Sep 27 16:04:45 2024 00:46:07.095 read: IOPS=671, BW=2688KiB/s (2752kB/s)(26.2MiB/10001msec) 00:46:07.095 slat (usec): min=5, max=100, avg=28.27, stdev=16.88 00:46:07.095 clat (usec): min=10827, max=31211, avg=23565.16, stdev=917.86 00:46:07.095 lat (usec): min=10836, max=31233, avg=23593.43, stdev=917.35 00:46:07.095 clat percentiles (usec): 00:46:07.095 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.095 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.095 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:46:07.095 | 99.00th=[25035], 99.50th=[25297], 99.90th=[31065], 99.95th=[31065], 00:46:07.095 | 99.99th=[31327] 00:46:07.095 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2680.95, stdev=51.77, samples=19 00:46:07.095 iops : min= 640, max= 704, avg=670.21, stdev=12.94, samples=19 00:46:07.095 lat (msec) : 20=0.51%, 50=99.49% 00:46:07.095 cpu : usr=98.94%, sys=0.71%, ctx=42, majf=0, minf=51 00:46:07.095 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:46:07.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 issued rwts: total=6720,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:46:07.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.095 filename2: (groupid=0, jobs=1): err= 0: pid=764401: Fri Sep 27 16:04:45 2024 00:46:07.095 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10005msec) 00:46:07.095 slat (nsec): min=5564, max=93077, avg=17871.61, stdev=14173.16 00:46:07.095 clat (usec): min=14979, max=29754, avg=23675.16, stdev=716.48 00:46:07.095 lat (usec): min=14992, max=29789, avg=23693.03, stdev=714.30 00:46:07.095 clat percentiles (usec): 00:46:07.095 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:46:07.095 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:46:07.095 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:46:07.095 | 99.00th=[24773], 99.50th=[25035], 99.90th=[29492], 99.95th=[29754], 00:46:07.095 | 99.99th=[29754] 00:46:07.095 bw ( KiB/s): min= 2565, max= 2693, per=4.14%, avg=2682.05, stdev=28.39, samples=19 00:46:07.095 iops : min= 641, max= 673, avg=670.47, stdev= 7.14, samples=19 00:46:07.095 lat (msec) : 20=0.48%, 50=99.52% 00:46:07.095 cpu : usr=98.59%, sys=0.92%, ctx=127, majf=0, minf=33 00:46:07.095 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:07.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.095 filename2: (groupid=0, jobs=1): err= 0: pid=764402: Fri Sep 27 16:04:45 2024 00:46:07.095 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10006msec) 00:46:07.095 slat (usec): min=5, max=101, avg=28.85, stdev=18.23 00:46:07.095 clat (usec): min=10797, max=34956, avg=23592.13, stdev=1152.99 00:46:07.095 lat (usec): min=10805, max=34995, avg=23620.98, stdev=1152.42 00:46:07.095 clat percentiles (usec): 00:46:07.095 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:46:07.095 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.095 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:46:07.095 | 99.00th=[25035], 99.50th=[29754], 99.90th=[34866], 99.95th=[34866], 00:46:07.095 | 99.99th=[34866] 00:46:07.095 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2673.89, stdev=58.61, samples=19 00:46:07.095 iops : min= 640, max= 704, avg=668.42, stdev=14.65, samples=19 00:46:07.095 lat (msec) : 20=0.48%, 50=99.52% 00:46:07.095 cpu : usr=98.86%, sys=0.73%, ctx=87, majf=0, minf=30 00:46:07.095 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:07.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.095 filename2: (groupid=0, jobs=1): err= 0: pid=764403: Fri Sep 27 16:04:45 2024 00:46:07.095 read: IOPS=682, BW=2729KiB/s (2795kB/s)(26.7MiB/10018msec) 00:46:07.095 slat (usec): min=5, max=100, avg=20.09, stdev=15.86 00:46:07.095 clat (usec): min=10981, max=39657, avg=23279.36, stdev=2867.64 00:46:07.095 lat (usec): min=10987, max=39722, avg=23299.45, stdev=2869.28 00:46:07.095 clat percentiles (usec): 00:46:07.095 | 1.00th=[13829], 5.00th=[17695], 10.00th=[19792], 20.00th=[22938], 00:46:07.095 | 
30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:46:07.095 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[28181], 00:46:07.095 | 99.00th=[32375], 99.50th=[34341], 99.90th=[39060], 99.95th=[39584], 00:46:07.095 | 99.99th=[39584] 00:46:07.095 bw ( KiB/s): min= 2554, max= 2928, per=4.22%, avg=2730.60, stdev=82.65, samples=20 00:46:07.095 iops : min= 638, max= 732, avg=682.60, stdev=20.73, samples=20 00:46:07.095 lat (msec) : 20=11.02%, 50=88.98% 00:46:07.095 cpu : usr=98.18%, sys=1.15%, ctx=116, majf=0, minf=34 00:46:07.095 IO depths : 1=3.2%, 2=6.7%, 4=15.0%, 8=64.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:46:07.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 complete : 0=0.0%, 4=91.6%, 8=4.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.095 issued rwts: total=6835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:07.095 00:46:07.095 Run status group 0 (all jobs): 00:46:07.095 READ: bw=63.2MiB/s (66.3MB/s), 2679KiB/s-2833KiB/s (2743kB/s-2901kB/s), io=635MiB (666MB), run=10001-10044msec 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.095 bdev_null0 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:07.095 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 [2024-09-27 16:04:46.158779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 bdev_null1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:07.096 { 00:46:07.096 "params": { 00:46:07.096 "name": "Nvme$subsystem", 00:46:07.096 "trtype": "$TEST_TRANSPORT", 00:46:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:07.096 "adrfam": "ipv4", 00:46:07.096 "trsvcid": "$NVMF_PORT", 00:46:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:07.096 "hdgst": ${hdgst:-false}, 00:46:07.096 "ddgst": ${ddgst:-false} 00:46:07.096 }, 00:46:07.096 "method": "bdev_nvme_attach_controller" 00:46:07.096 } 00:46:07.096 EOF 00:46:07.096 )") 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:07.096 { 00:46:07.096 "params": { 00:46:07.096 "name": "Nvme$subsystem", 00:46:07.096 "trtype": "$TEST_TRANSPORT", 00:46:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:07.096 "adrfam": "ipv4", 00:46:07.096 "trsvcid": "$NVMF_PORT", 00:46:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:07.096 "hdgst": ${hdgst:-false}, 00:46:07.096 "ddgst": ${ddgst:-false} 00:46:07.096 }, 00:46:07.096 "method": "bdev_nvme_attach_controller" 00:46:07.096 } 00:46:07.096 EOF 00:46:07.096 )") 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:07.096 16:04:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:07.096 "params": { 00:46:07.096 "name": "Nvme0", 00:46:07.096 "trtype": "tcp", 00:46:07.096 "traddr": "10.0.0.2", 00:46:07.096 "adrfam": "ipv4", 00:46:07.096 "trsvcid": "4420", 00:46:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:07.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:07.096 "hdgst": false, 00:46:07.096 "ddgst": false 00:46:07.096 }, 00:46:07.096 "method": "bdev_nvme_attach_controller" 00:46:07.096 },{ 00:46:07.096 "params": { 00:46:07.096 "name": "Nvme1", 00:46:07.096 "trtype": "tcp", 00:46:07.096 "traddr": "10.0.0.2", 00:46:07.096 "adrfam": "ipv4", 00:46:07.096 "trsvcid": "4420", 00:46:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:07.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:07.096 "hdgst": false, 00:46:07.096 "ddgst": false 00:46:07.096 }, 00:46:07.096 "method": "bdev_nvme_attach_controller" 00:46:07.096 }' 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:07.096 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:07.096 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:07.096 ... 00:46:07.096 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:07.096 ... 
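Stripped of the xtrace plumbing, the run below is fio driving SPDK's external spdk_bdev ioengine against the JSON assembled above (the per-subsystem heredocs are comma-joined via IFS=, and printed as bdev_nvme_attach_controller entries). A minimal standalone sketch of the same invocation, assuming a local SPDK build and the target at 10.0.0.2:4420; the bdev.json file, the single Nvme0 controller, and the Nvme0n1 filename are illustrative stand-ins for the /dev/fd/62 config and /dev/fd/61 job file generated by the script:

    # JSON config consumed via --spdk_json_conf (Nvme1 would be attached the same way)
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # the ioengine lives in an external plugin, hence the LD_PRELOAD; --thread=1 is
    # required by the SPDK plugin, and --filename names the bdev rather than a device node
    LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --name=filename0 --thread=1 --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=Nvme0n1 --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
        --runtime=5

The two generated job sections (filename0 and filename1) at numjobs=2 account for the 4 threads fio reports below.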
00:46:07.096 fio-3.35 00:46:07.096 Starting 4 threads 00:46:12.379 00:46:12.379 filename0: (groupid=0, jobs=1): err= 0: pid=766671: Fri Sep 27 16:04:52 2024 00:46:12.379 read: IOPS=2972, BW=23.2MiB/s (24.3MB/s)(116MiB/5003msec) 00:46:12.379 slat (nsec): min=5395, max=70784, avg=6668.90, stdev=2802.67 00:46:12.379 clat (usec): min=1615, max=5504, avg=2676.06, stdev=165.23 00:46:12.379 lat (usec): min=1623, max=5510, avg=2682.73, stdev=165.34 00:46:12.379 clat percentiles (usec): 00:46:12.379 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:46:12.379 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:12.379 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:46:12.379 | 99.00th=[ 3294], 99.50th=[ 3523], 99.90th=[ 3982], 99.95th=[ 4178], 00:46:12.379 | 99.99th=[ 5473] 00:46:12.379 bw ( KiB/s): min=23600, max=24048, per=25.12%, avg=23782.40, stdev=159.32, samples=10 00:46:12.379 iops : min= 2950, max= 3006, avg=2972.80, stdev=19.92, samples=10 00:46:12.379 lat (msec) : 2=0.30%, 4=99.62%, 10=0.09% 00:46:12.379 cpu : usr=96.16%, sys=3.36%, ctx=192, majf=0, minf=0 00:46:12.379 IO depths : 1=0.1%, 2=0.2%, 4=67.0%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:12.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.379 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.379 issued rwts: total=14869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:12.379 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:12.379 filename0: (groupid=0, jobs=1): err= 0: pid=766672: Fri Sep 27 16:04:52 2024 00:46:12.380 read: IOPS=2960, BW=23.1MiB/s (24.3MB/s)(116MiB/5003msec) 00:46:12.380 slat (nsec): min=5397, max=72413, avg=7650.26, stdev=3116.68 00:46:12.380 clat (usec): min=1012, max=4453, avg=2681.84, stdev=166.93 00:46:12.380 lat (usec): min=1029, max=4468, avg=2689.49, stdev=166.70 00:46:12.380 clat percentiles (usec): 00:46:12.380 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:46:12.380 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:12.380 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:46:12.380 | 99.00th=[ 3195], 99.50th=[ 3654], 99.90th=[ 4146], 99.95th=[ 4228], 00:46:12.380 | 99.99th=[ 4424] 00:46:12.380 bw ( KiB/s): min=23552, max=23776, per=25.03%, avg=23694.22, stdev=74.81, samples=9 00:46:12.380 iops : min= 2944, max= 2972, avg=2961.78, stdev= 9.35, samples=9 00:46:12.380 lat (msec) : 2=0.38%, 4=99.41%, 10=0.21% 00:46:12.380 cpu : usr=95.64%, sys=3.72%, ctx=88, majf=0, minf=0 00:46:12.380 IO depths : 1=0.1%, 2=0.1%, 4=70.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:12.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 issued rwts: total=14811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:12.380 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:12.380 filename1: (groupid=0, jobs=1): err= 0: pid=766674: Fri Sep 27 16:04:52 2024 00:46:12.380 read: IOPS=2946, BW=23.0MiB/s (24.1MB/s)(115MiB/5003msec) 00:46:12.380 slat (nsec): min=5401, max=49897, avg=6090.29, stdev=2109.44 00:46:12.380 clat (usec): min=748, max=5347, avg=2697.80, stdev=197.00 00:46:12.380 lat (usec): min=760, max=5356, avg=2703.89, stdev=197.11 00:46:12.380 clat percentiles (usec): 00:46:12.380 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 00:46:12.380 | 30.00th=[ 2671], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:12.380 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2933], 00:46:12.380 | 99.00th=[ 3654], 99.50th=[ 3982], 99.90th=[ 4490], 99.95th=[ 5145], 00:46:12.380 | 99.99th=[ 5342] 00:46:12.380 bw ( KiB/s): min=23424, max=23648, per=24.91%, avg=23583.90, stdev=73.10, samples=10 00:46:12.380 iops : min= 2928, max= 2956, avg=2947.90, stdev= 9.12, samples=10 00:46:12.380 lat (usec) : 750=0.01%, 1000=0.01% 00:46:12.380 lat (msec) : 2=0.32%, 4=99.23%, 10=0.44% 00:46:12.380 cpu : usr=97.10%, sys=2.66%, ctx=8, majf=0, minf=0 00:46:12.380 IO depths : 1=0.1%, 2=0.2%, 4=73.0%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:12.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 issued rwts: total=14740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:12.380 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:12.380 filename1: (groupid=0, jobs=1): err= 0: pid=766675: Fri Sep 27 16:04:52 2024 00:46:12.380 read: IOPS=2957, BW=23.1MiB/s (24.2MB/s)(116MiB/5004msec) 00:46:12.380 slat (nsec): min=5401, max=59518, avg=6330.79, stdev=2224.68 00:46:12.380 clat (usec): min=1124, max=5737, avg=2688.45, stdev=207.25 00:46:12.380 lat (usec): min=1130, max=5743, avg=2694.78, stdev=207.33 00:46:12.380 clat percentiles (usec): 00:46:12.380 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2638], 00:46:12.380 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:12.380 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2933], 00:46:12.380 | 99.00th=[ 3654], 99.50th=[ 3982], 99.90th=[ 4424], 99.95th=[ 4555], 00:46:12.380 | 99.99th=[ 5735] 00:46:12.380 bw ( KiB/s): min=23280, max=23904, per=25.00%, avg=23672.00, stdev=182.78, samples=10 00:46:12.380 iops : min= 2910, max= 2988, avg=2959.00, stdev=22.85, samples=10 00:46:12.380 lat (msec) : 2=0.29%, 4=99.28%, 10=0.43% 00:46:12.380 cpu : usr=96.66%, sys=3.10%, ctx=7, majf=0, minf=0 00:46:12.380 IO depths : 1=0.1%, 2=0.1%, 4=69.6%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:12.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:12.380 issued rwts: total=14800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:12.380 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:12.380 00:46:12.380 Run status group 0 (all jobs): 00:46:12.380 READ: bw=92.5MiB/s (96.9MB/s), 23.0MiB/s-23.2MiB/s (24.1MB/s-24.3MB/s), io=463MiB (485MB), run=5003-5004msec 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 00:46:12.380 real 0m24.424s 00:46:12.380 user 5m14.642s 00:46:12.380 sys 0m4.473s 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 ************************************ 00:46:12.380 END TEST fio_dif_rand_params 00:46:12.380 ************************************ 00:46:12.380 16:04:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:46:12.380 16:04:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:12.380 16:04:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 ************************************ 00:46:12.380 START TEST fio_dif_digest 00:46:12.380 ************************************ 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:46:12.380 16:04:52 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 bdev_null0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:12.380 [2024-09-27 16:04:52.716911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:46:12.380 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:12.381 { 00:46:12.381 "params": { 00:46:12.381 "name": "Nvme$subsystem", 00:46:12.381 "trtype": "$TEST_TRANSPORT", 00:46:12.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:12.381 "adrfam": "ipv4", 00:46:12.381 "trsvcid": "$NVMF_PORT", 00:46:12.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:12.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:12.381 "hdgst": ${hdgst:-false}, 00:46:12.381 "ddgst": ${ddgst:-false} 00:46:12.381 }, 00:46:12.381 "method": "bdev_nvme_attach_controller" 00:46:12.381 } 
00:46:12.381 EOF 00:46:12.381 )") 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:12.381 "params": { 00:46:12.381 "name": "Nvme0", 00:46:12.381 "trtype": "tcp", 00:46:12.381 "traddr": "10.0.0.2", 00:46:12.381 "adrfam": "ipv4", 00:46:12.381 "trsvcid": "4420", 00:46:12.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:12.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:12.381 "hdgst": true, 00:46:12.381 "ddgst": true 00:46:12.381 }, 00:46:12.381 "method": "bdev_nvme_attach_controller" 00:46:12.381 }' 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:12.381 16:04:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:12.949 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:12.949 ... 
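The JSON printed just above is what fio consumes over /dev/fd/62 for the digest run. A minimal standalone equivalent follows; the params block is copied from this trace, but the outer "subsystems"/"config" wrapper, the thread=1 requirement, and the Nvme0n1 bdev name are assumptions based on SPDK's usual JSON config layout and fio plugin conventions:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true, "ddgst": true
      }
    }]
  }]
}
EOF
# filename selects the attached bdev; bs/iodepth/numjobs/runtime mirror
# the fio_dif_digest parameters traced above
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio --thread=1 \
  --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
  --name=digest --filename=Nvme0n1 --rw=randread --bs=128k \
  --iodepth=3 --numjobs=3 --time_based=1 --runtime=10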
00:46:12.949 fio-3.35 00:46:12.949 Starting 3 threads 00:46:25.174 00:46:25.174 filename0: (groupid=0, jobs=1): err= 0: pid=768097: Fri Sep 27 16:05:03 2024 00:46:25.174 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(346MiB/10046msec) 00:46:25.174 slat (nsec): min=5733, max=32000, avg=7978.28, stdev=1904.59 00:46:25.174 clat (usec): min=8331, max=54203, avg=10869.91, stdev=1381.17 00:46:25.174 lat (usec): min=8340, max=54210, avg=10877.89, stdev=1381.16 00:46:25.174 clat percentiles (usec): 00:46:25.174 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:46:25.174 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:46:25.174 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:46:25.174 | 99.00th=[12780], 99.50th=[13173], 99.90th=[20317], 99.95th=[47449], 00:46:25.174 | 99.99th=[54264] 00:46:25.174 bw ( KiB/s): min=34304, max=36352, per=29.92%, avg=35379.20, stdev=522.67, samples=20 00:46:25.174 iops : min= 268, max= 284, avg=276.40, stdev= 4.08, samples=20 00:46:25.174 lat (msec) : 10=15.18%, 20=84.67%, 50=0.11%, 100=0.04% 00:46:25.174 cpu : usr=90.68%, sys=6.81%, ctx=826, majf=0, minf=83 00:46:25.174 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 issued rwts: total=2766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:25.174 filename0: (groupid=0, jobs=1): err= 0: pid=768098: Fri Sep 27 16:05:03 2024 00:46:25.174 read: IOPS=337, BW=42.2MiB/s (44.3MB/s)(424MiB/10045msec) 00:46:25.174 slat (nsec): min=5814, max=31766, avg=8033.26, stdev=1535.85 00:46:25.174 clat (usec): min=6725, max=47867, avg=8859.41, stdev=1135.63 00:46:25.174 lat (usec): min=6731, max=47874, avg=8867.45, stdev=1135.71 00:46:25.174 clat percentiles (usec): 00:46:25.174 | 1.00th=[ 7373], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8291], 00:46:25.174 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:46:25.174 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:46:25.174 | 99.00th=[10421], 99.50th=[10683], 99.90th=[13042], 99.95th=[46924], 00:46:25.174 | 99.99th=[47973] 00:46:25.174 bw ( KiB/s): min=41728, max=45056, per=36.70%, avg=43404.80, stdev=832.54, samples=20 00:46:25.174 iops : min= 326, max= 352, avg=339.10, stdev= 6.50, samples=20 00:46:25.174 lat (msec) : 10=96.40%, 20=3.54%, 50=0.06% 00:46:25.174 cpu : usr=95.58%, sys=4.19%, ctx=14, majf=0, minf=87 00:46:25.174 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 issued rwts: total=3393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:25.174 filename0: (groupid=0, jobs=1): err= 0: pid=768099: Fri Sep 27 16:05:03 2024 00:46:25.174 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(390MiB/10046msec) 00:46:25.174 slat (nsec): min=5621, max=31288, avg=6542.17, stdev=964.95 00:46:25.174 clat (usec): min=6886, max=47732, avg=9631.01, stdev=1185.92 00:46:25.174 lat (usec): min=6892, max=47739, avg=9637.55, stdev=1185.95 00:46:25.174 clat percentiles (usec): 00:46:25.174 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 
00:46:25.174 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:46:25.174 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:46:25.174 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12518], 99.95th=[45351], 00:46:25.174 | 99.99th=[47973] 00:46:25.174 bw ( KiB/s): min=37120, max=41728, per=33.78%, avg=39940.00, stdev=948.31, samples=20 00:46:25.174 iops : min= 290, max= 326, avg=312.00, stdev= 7.40, samples=20 00:46:25.174 lat (msec) : 10=71.97%, 20=27.96%, 50=0.06% 00:46:25.174 cpu : usr=94.59%, sys=5.19%, ctx=18, majf=0, minf=173 00:46:25.174 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.174 issued rwts: total=3122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:25.174 00:46:25.174 Run status group 0 (all jobs): 00:46:25.174 READ: bw=115MiB/s (121MB/s), 34.4MiB/s-42.2MiB/s (36.1MB/s-44.3MB/s), io=1160MiB (1216MB), run=10045-10046msec 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:25.174 00:46:25.174 real 0m11.181s 00:46:25.174 user 0m39.944s 00:46:25.174 sys 0m1.906s 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:25.174 16:05:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:25.174 ************************************ 00:46:25.174 END TEST fio_dif_digest 00:46:25.174 ************************************ 00:46:25.174 16:05:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:25.174 16:05:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:25.174 rmmod nvme_tcp 00:46:25.174 rmmod nvme_fabrics 00:46:25.174 rmmod nvme_keyring 00:46:25.174 16:05:03 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 757954 ']' 00:46:25.174 16:05:03 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 757954 00:46:25.174 16:05:03 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 757954 ']' 00:46:25.174 16:05:03 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 757954 00:46:25.174 16:05:03 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:46:25.174 16:05:03 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:25.174 16:05:03 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 757954 00:46:25.174 16:05:04 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:25.174 16:05:04 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:25.174 16:05:04 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 757954' 00:46:25.174 killing process with pid 757954 00:46:25.174 16:05:04 nvmf_dif -- common/autotest_common.sh@969 -- # kill 757954 00:46:25.174 16:05:04 nvmf_dif -- common/autotest_common.sh@974 -- # wait 757954 00:46:25.174 16:05:04 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:46:25.174 16:05:04 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:27.083 Waiting for block devices as requested 00:46:27.083 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:27.343 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:27.343 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:27.343 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:27.603 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:27.603 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:27.603 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:27.603 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:27.862 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:27.862 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:28.123 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:28.123 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:28.123 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:28.383 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:28.383 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:28.383 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:28.643 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:28.902 16:05:09 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:28.902 16:05:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:28.902 16:05:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:30.820 16:05:11 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:30.820 
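The cleanup just traced undoes only what the test added: iptr strips the SPDK_NVMF-tagged iptables rules, the target namespace is removed, and the initiator address is flushed. Condensed below; the namespace-removal line is inferred from the _remove_spdk_ns call, whose body is not shown in this log:

# keep every rule except those the harness tagged with SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore
# presumed equivalent of _remove_spdk_ns for this topology (assumption)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1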
00:46:30.820 real 1m18.044s 00:46:30.820 user 7m55.981s 00:46:30.820 sys 0m22.004s 00:46:30.820 16:05:11 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:30.820 16:05:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:30.820 ************************************ 00:46:30.820 END TEST nvmf_dif 00:46:30.820 ************************************ 00:46:31.081 16:05:11 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:31.081 16:05:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:31.081 16:05:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:31.081 16:05:11 -- common/autotest_common.sh@10 -- # set +x 00:46:31.081 ************************************ 00:46:31.081 START TEST nvmf_abort_qd_sizes 00:46:31.081 ************************************ 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:31.081 * Looking for test storage... 00:46:31.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:31.081 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:31.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.342 --rc genhtml_branch_coverage=1 00:46:31.342 --rc genhtml_function_coverage=1 00:46:31.342 --rc genhtml_legend=1 00:46:31.342 --rc geninfo_all_blocks=1 00:46:31.342 --rc geninfo_unexecuted_blocks=1 00:46:31.342 00:46:31.342 ' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:31.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.342 --rc genhtml_branch_coverage=1 00:46:31.342 --rc genhtml_function_coverage=1 00:46:31.342 --rc genhtml_legend=1 00:46:31.342 --rc geninfo_all_blocks=1 00:46:31.342 --rc geninfo_unexecuted_blocks=1 00:46:31.342 00:46:31.342 ' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:31.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.342 --rc genhtml_branch_coverage=1 00:46:31.342 --rc genhtml_function_coverage=1 00:46:31.342 --rc genhtml_legend=1 00:46:31.342 --rc geninfo_all_blocks=1 00:46:31.342 --rc geninfo_unexecuted_blocks=1 00:46:31.342 00:46:31.342 ' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:31.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:31.342 --rc genhtml_branch_coverage=1 00:46:31.342 --rc genhtml_function_coverage=1 00:46:31.342 --rc genhtml_legend=1 00:46:31.342 --rc geninfo_all_blocks=1 00:46:31.342 --rc geninfo_unexecuted_blocks=1 00:46:31.342 00:46:31.342 ' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:31.342 16:05:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:31.343 16:05:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:39.483 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:39.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:39.483 Found net devices under 0000:31:00.0: cvl_0_0 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:39.483 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:39.484 Found net devices under 0000:31:00.1: cvl_0_1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:39.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:39.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:46:39.484 00:46:39.484 --- 10.0.0.2 ping statistics --- 00:46:39.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:39.484 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:39.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:39.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:46:39.484 00:46:39.484 --- 10.0.0.1 ping statistics --- 00:46:39.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:39.484 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:46:39.484 16:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:42.033 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:42.033 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:42.295 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:46:42.556 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:46:42.556 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:42.556 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:46:42.556 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:42.556 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=777647 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 777647 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 777647 ']' 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
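For reference, the target/initiator topology these tests run against was assembled earlier in this run with the commands below; device names, addresses, and the SPDK_NVMF rule tag are all verbatim from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# tag the rule so teardown can strip it with 'grep -v SPDK_NVMF'
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator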
00:46:42.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable
00:46:42.816 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:46:42.816 [2024-09-27 16:05:23.095350] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:46:42.816 [2024-09-27 16:05:23.095396] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:46:42.816 [2024-09-27 16:05:23.180760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:46:42.816 [2024-09-27 16:05:23.214177] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:46:42.816 [2024-09-27 16:05:23.214213] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:46:42.816 [2024-09-27 16:05:23.214221] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:46:42.816 [2024-09-27 16:05:23.214230] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:46:42.816 [2024-09-27 16:05:23.214236] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:46:42.816 [2024-09-27 16:05:23.214373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:46:42.816 [2024-09-27 16:05:23.214525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:46:42.816 [2024-09-27 16:05:23.214673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:46:42.816 [2024-09-27 16:05:23.214675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]]
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]]
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable
00:46:43.758 16:05:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:46:43.758 ************************************
00:46:43.758 START TEST spdk_target_abort
00:46:43.758 ************************************
00:46:43.758 16:05:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target
00:46:43.758 16:05:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:46:43.758 16:05:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
00:46:43.758 16:05:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:43.758 16:05:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:44.032 spdk_targetn1
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:44.033 [2024-09-27 16:05:24.296656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:44.033 [2024-09-27 16:05:24.336956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:46:44.033 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:46:44.294 [2024-09-27 16:05:24.542438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:224 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.542482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001f p:1 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:704 len:8 PRP1 0x2000078c2000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.558651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:005a p:1 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.577848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1192 len:8 PRP1 0x2000078be000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.577881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0096 p:1 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1408 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b2 p:1 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.603679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2000 len:8 PRP1 0x2000078be000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00fb p:1 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.609435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2112 len:8 PRP1 0x2000078c6000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.609461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:46:44.294 [2024-09-27 16:05:24.661373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3768 len:8 PRP1 0x2000078c6000 PRP2 0x0
00:46:44.294 [2024-09-27 16:05:24.661408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0
00:46:44.295 [2024-09-27 16:05:24.672576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3952 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:46:44.295 [2024-09-27 16:05:24.672604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0
00:46:47.592 Initializing NVMe Controllers
00:46:47.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:46:47.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:46:47.592 Initialization complete. Launching workers.
00:46:47.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11115, failed: 8
00:46:47.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2298, failed to submit 8825
00:46:47.592 success 770, unsuccessful 1528, failed 0
00:46:47.592 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:46:47.592 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:46:47.593 [2024-09-27 16:05:27.684182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.684222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0
00:46:47.593 [2024-09-27 16:05:27.692261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:544 len:8 PRP1 0x200007c58000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.692290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0048 p:1 m:0 dnr:0
00:46:47.593 [2024-09-27 16:05:27.708008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:816 len:8 PRP1 0x200007c42000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.708032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0068 p:1 m:0 dnr:0
00:46:47.593 [2024-09-27 16:05:27.723017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1192 len:8 PRP1 0x200007c46000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.723041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0098 p:1 m:0 dnr:0
00:46:47.593 [2024-09-27 16:05:27.748070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1752 len:8 PRP1 0x200007c42000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.748093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0
00:46:47.593 [2024-09-27 16:05:27.770296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2352 len:8 PRP1 0x200007c4e000 PRP2 0x0
00:46:47.593 [2024-09-27 16:05:27.770317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:46:49.508 [2024-09-27 16:05:29.842778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:50032 len:8 PRP1 0x200007c3a000 PRP2 0x0
00:46:49.508 [2024-09-27 16:05:29.842810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:006f p:1 m:0 dnr:0
00:46:50.451 Initializing NVMe Controllers
00:46:50.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:46:50.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:46:50.451 Initialization complete. Launching workers.
00:46:50.451 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8573, failed: 7
00:46:50.451 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7352
00:46:50.451 success 348, unsuccessful 880, failed 0
00:46:50.451 16:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:46:50.451 16:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:46:52.363 [2024-09-27 16:05:32.533934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:167 nsid:1 lba:187584 len:8 PRP1 0x2000078f4000 PRP2 0x0
00:46:52.363 [2024-09-27 16:05:32.533971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:167 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:46:52.933 [2024-09-27 16:05:33.213838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:150 nsid:1 lba:267336 len:8 PRP1 0x2000078e6000 PRP2 0x0
00:46:52.933 [2024-09-27 16:05:33.213861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:150 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:53.506 Initializing NVMe Controllers
00:46:53.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:46:53.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:46:53.506 Initialization complete. Launching workers.
00:46:53.506 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44037, failed: 2
00:46:53.506 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 41417
00:46:53.506 success 598, unsuccessful 2024, failed 0
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:53.506 16:05:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 777647
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 777647 ']'
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 777647
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 777647
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 777647'
00:46:55.418 killing process with pid 777647
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 777647
00:46:55.418 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 777647
00:46:55.677 
00:46:55.677 real 0m12.003s
00:46:55.677 user 0m48.992s
00:46:55.677 sys 0m1.942s
00:46:55.677 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:46:55.678 16:05:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:46:55.678 ************************************
00:46:55.678 END TEST spdk_target_abort
00:46:55.678 ************************************
00:46:55.678 16:05:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:46:55.678 16:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:46:55.678 16:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable
00:46:55.678 16:05:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:46:55.678 ************************************
00:46:55.678 START TEST kernel_target_abort
00:46:55.678 ************************************
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=()
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]]
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]]
00:46:55.678 16:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:46:58.982 Waiting for block devices as requested
00:46:59.243 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:46:59.243 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:46:59.243 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:46:59.504 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:46:59.504 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:46:59.504 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:46:59.765 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:46:59.765 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:46:59.765 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:47:00.026 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:47:00.026 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:47:00.026 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:47:00.287 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:47:00.287 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:47:00.287 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:47:00.548 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:47:00.548 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme*
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]]
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:47:00.809 No valid GPT data, bailing
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]]
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:47:00.809 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:47:01.071 
00:47:01.071 Discovery Log Number of Records 2, Generation counter 2
00:47:01.071 =====Discovery Log Entry 0======
00:47:01.071 trtype: tcp
00:47:01.071 adrfam: ipv4
00:47:01.071 subtype: current discovery subsystem
00:47:01.071 treq: not specified, sq flow control disable supported
00:47:01.071 portid: 1
00:47:01.071 trsvcid: 4420
00:47:01.071 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:47:01.071 traddr: 10.0.0.1
00:47:01.071 eflags: none
00:47:01.071 sectype: none
00:47:01.071 =====Discovery Log Entry 1======
00:47:01.071 trtype: tcp
00:47:01.071 adrfam: ipv4
00:47:01.071 subtype: nvme subsystem
00:47:01.071 treq: not specified, sq flow control disable supported
00:47:01.071 portid: 1
00:47:01.071 trsvcid: 4420
00:47:01.071 subnqn: nqn.2016-06.io.spdk:testnqn
00:47:01.071 traddr: 10.0.0.1
00:47:01.071 eflags: none
00:47:01.071 sectype: none
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:47:01.071 16:05:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:47:04.368 Initializing NVMe Controllers
00:47:04.368 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:47:04.368 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:47:04.368 Initialization complete. Launching workers.
00:47:04.368 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67563, failed: 0
00:47:04.368 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67563, failed to submit 0
00:47:04.368 success 0, unsuccessful 67563, failed 0
00:47:04.368 16:05:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:47:04.368 16:05:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:47:07.666 Initializing NVMe Controllers
00:47:07.666 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:47:07.666 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:47:07.666 Initialization complete. Launching workers.
00:47:07.666 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118708, failed: 0
00:47:07.666 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29890, failed to submit 88818
00:47:07.666 success 0, unsuccessful 29890, failed 0
00:47:07.666 16:05:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:47:07.666 16:05:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:47:10.966 Initializing NVMe Controllers
00:47:10.966 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:47:10.966 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:47:10.966 Initialization complete. Launching workers.
00:47:10.966 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145988, failed: 0
00:47:10.966 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36538, failed to submit 109450
00:47:10.966 success 0, unsuccessful 36538, failed 0
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*)
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet
00:47:10.966 16:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:47:14.268 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:47:14.268 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:47:16.181 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:47:16.181 
00:47:16.181 real 0m20.473s
00:47:16.181 user 0m9.942s
00:47:16.181 sys 0m6.179s
00:47:16.181 16:05:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:47:16.181 16:05:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:47:16.181 ************************************
00:47:16.181 END TEST kernel_target_abort
00:47:16.181 ************************************
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:47:16.181 rmmod nvme_tcp
00:47:16.181 rmmod nvme_fabrics
00:47:16.181 rmmod nvme_keyring
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 777647 ']'
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 777647
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 777647 ']'
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 777647
00:47:16.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (777647) - No such process
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 777647 is not found'
00:47:16.181 Process with pid 777647 is not found
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']'
00:47:16.181 16:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:47:20.385 Waiting for block devices as requested
00:47:20.385 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:47:20.385 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:47:20.647 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:47:20.647 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:47:20.647 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:47:20.908 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:47:20.908 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:47:20.908 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:47:21.169 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:47:21.169 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:47:21.430 16:06:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:47:23.976 16:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:47:23.976 
00:47:23.976 real 0m52.498s
00:47:23.976 user 1m4.358s
00:47:23.976 sys 0m19.299s
00:47:23.976 16:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:47:23.976 16:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:47:23.976 ************************************
00:47:23.976 END TEST nvmf_abort_qd_sizes
00:47:23.976 ************************************
00:47:23.977 16:06:03 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:47:23.977 16:06:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:47:23.977 16:06:03 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:47:23.977 16:06:03 -- common/autotest_common.sh@10 -- # set +x
00:47:23.977 ************************************
00:47:23.977 START TEST keyring_file
00:47:23.977 ************************************
00:47:23.977 16:06:03 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:47:23.977 * Looking for test storage...
00:47:23.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@345 -- # : 1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@365 -- # decimal 1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@353 -- # local d=1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@355 -- # echo 1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@366 -- # decimal 2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@353 -- # local d=2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@355 -- # echo 2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@368 -- # return 0
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:47:23.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:23.977 --rc genhtml_branch_coverage=1
00:47:23.977 --rc genhtml_function_coverage=1
00:47:23.977 --rc genhtml_legend=1
00:47:23.977 --rc geninfo_all_blocks=1
00:47:23.977 --rc geninfo_unexecuted_blocks=1
00:47:23.977 
00:47:23.977 '
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:47:23.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:23.977 --rc genhtml_branch_coverage=1
00:47:23.977 --rc genhtml_function_coverage=1
00:47:23.977 --rc genhtml_legend=1
00:47:23.977 --rc geninfo_all_blocks=1
00:47:23.977 --rc geninfo_unexecuted_blocks=1
00:47:23.977 
00:47:23.977 '
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:47:23.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:23.977 --rc genhtml_branch_coverage=1
00:47:23.977 --rc genhtml_function_coverage=1
00:47:23.977 --rc genhtml_legend=1
00:47:23.977 --rc geninfo_all_blocks=1
00:47:23.977 --rc geninfo_unexecuted_blocks=1
00:47:23.977 
00:47:23.977 '
00:47:23.977 16:06:04 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:47:23.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:23.977 --rc genhtml_branch_coverage=1
00:47:23.977 --rc genhtml_function_coverage=1
00:47:23.977 --rc genhtml_legend=1
00:47:23.977 --rc geninfo_all_blocks=1
00:47:23.977 --rc geninfo_unexecuted_blocks=1
00:47:23.977 
00:47:23.977 '
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:47:23.977 16:06:04 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:47:23.977 16:06:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:23.977 16:06:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:23.977 16:06:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:23.977 16:06:04 keyring_file -- paths/export.sh@5 -- # export PATH
00:47:23.977 16:06:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@51 -- # : 0
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:47:23.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0
00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:47:23.977 16:06:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FhNO8nkpzX 00:47:23.977 16:06:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:23.977 16:06:04 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@729 -- # python - 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FhNO8nkpzX 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FhNO8nkpzX 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FhNO8nkpzX 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xHziMsasUy 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:47:23.978 16:06:04 keyring_file -- nvmf/common.sh@729 -- # python - 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xHziMsasUy 00:47:23.978 16:06:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xHziMsasUy 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xHziMsasUy 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=787938 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 787938 00:47:23.978 16:06:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 787938 ']' 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:23.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:23.978 16:06:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:23.978 [2024-09-27 16:06:04.379308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:47:23.978 [2024-09-27 16:06:04.379386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787938 ] 00:47:23.978 [2024-09-27 16:06:04.461597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:24.239 [2024-09-27 16:06:04.509218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:24.810 16:06:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:24.810 [2024-09-27 16:06:05.184817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:24.810 null0 00:47:24.810 [2024-09-27 16:06:05.216858] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:24.810 [2024-09-27 16:06:05.217187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.810 16:06:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:24.810 [2024-09-27 16:06:05.248933] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:24.810 request: 00:47:24.810 { 00:47:24.810 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:24.810 "secure_channel": false, 00:47:24.810 "listen_address": { 00:47:24.810 "trtype": "tcp", 00:47:24.810 "traddr": "127.0.0.1", 00:47:24.810 "trsvcid": "4420" 00:47:24.810 }, 00:47:24.810 "method": "nvmf_subsystem_add_listener", 00:47:24.810 "req_id": 1 00:47:24.810 } 00:47:24.810 Got JSON-RPC error response 00:47:24.810 response: 00:47:24.810 { 00:47:24.810 "code": 
-32602, 00:47:24.810 "message": "Invalid parameters" 00:47:24.810 } 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:24.810 16:06:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=788049 00:47:24.810 16:06:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 788049 /var/tmp/bperf.sock 00:47:24.810 16:06:05 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 788049 ']' 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:24.810 16:06:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:25.071 [2024-09-27 16:06:05.306825] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:47:25.071 [2024-09-27 16:06:05.306876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788049 ] 00:47:25.071 [2024-09-27 16:06:05.384116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.071 [2024-09-27 16:06:05.415720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:25.641 16:06:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:25.641 16:06:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:25.641 16:06:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:25.641 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:25.902 16:06:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xHziMsasUy 00:47:25.902 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xHziMsasUy 00:47:26.162 16:06:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:26.162 16:06:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:26.162 
16:06:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FhNO8nkpzX == \/\t\m\p\/\t\m\p\.\F\h\N\O\8\n\k\p\z\X ]] 00:47:26.162 16:06:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:26.162 16:06:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:26.162 16:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:26.423 16:06:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.xHziMsasUy == \/\t\m\p\/\t\m\p\.\x\H\z\i\M\s\a\s\U\y ]] 00:47:26.423 16:06:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:26.423 16:06:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:26.423 16:06:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:26.423 16:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:26.423 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:26.423 16:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:26.683 16:06:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:26.683 16:06:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:26.683 16:06:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:26.683 16:06:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:26.683 16:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:26.683 16:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:26.683 16:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:26.944 16:06:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:26.944 16:06:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:26.944 16:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:26.944 [2024-09-27 16:06:07.344181] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:26.944 nvme0n1 00:47:27.205 16:06:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.205 16:06:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:27.205 16:06:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:27.205 16:06:07 keyring_file -- 
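The refcnt assertions in this stretch track how many holders reference each key: keyring_get_keys reports refcnt 1 for a key that is merely registered, and key0 apparently rises to 2 once bdev_nvme_attach_controller --psk key0 brings up the TLS connection, dropping back to 1 after detach. A sketch of the helper the trace keeps invoking, assuming the same bperf socket:

    # read a key's reference count from the bperf keyring (sketch of get_refcnt)
    get_refcnt() {
        ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }
    get_refcnt key0   # 2 while nvme0 holds the key, 1 otherwise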
keyring/common.sh@12 -- # get_key key1 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:27.205 16:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.466 16:06:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:27.466 16:06:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:27.466 Running I/O for 1 seconds... 00:47:28.849 18491.00 IOPS, 72.23 MiB/s 00:47:28.849 Latency(us) 00:47:28.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:28.849 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:28.849 nvme0n1 : 1.00 18550.77 72.46 0.00 0.00 6888.30 2362.03 12888.75 00:47:28.850 =================================================================================================================== 00:47:28.850 Total : 18550.77 72.46 0.00 0.00 6888.30 2362.03 12888.75 00:47:28.850 { 00:47:28.850 "results": [ 00:47:28.850 { 00:47:28.850 "job": "nvme0n1", 00:47:28.850 "core_mask": "0x2", 00:47:28.850 "workload": "randrw", 00:47:28.850 "percentage": 50, 00:47:28.850 "status": "finished", 00:47:28.850 "queue_depth": 128, 00:47:28.850 "io_size": 4096, 00:47:28.850 "runtime": 1.003786, 00:47:28.850 "iops": 18550.76679690691, 00:47:28.850 "mibps": 72.46393280041762, 00:47:28.850 "io_failed": 0, 00:47:28.850 "io_timeout": 0, 00:47:28.850 "avg_latency_us": 6888.296937865851, 00:47:28.850 "min_latency_us": 2362.0266666666666, 00:47:28.850 "max_latency_us": 12888.746666666666 00:47:28.850 } 00:47:28.850 ], 00:47:28.850 "core_count": 1 00:47:28.850 } 00:47:28.850 16:06:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:28.850 16:06:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:28.850 16:06:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:28.850 16:06:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:28.850 16:06:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:28.850 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.110 
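The one-second randrw burst above is driven externally: bdevperf was started with -z, which makes it sit idle on its RPC socket until the companion bdevperf.py script issues perform_tests, after which the results land both as the human-readable table and as JSON. A sketch of the two-step invocation, with flags copied from the trace and an SPDK checkout assumed at ./spdk:

    # start bdevperf idle (-z) on a private RPC socket, then trigger the workload
    ./spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z &
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests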
16:06:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:29.111 16:06:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:29.111 16:06:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:29.111 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:29.371 [2024-09-27 16:06:09.630262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:29.371 [2024-09-27 16:06:09.631027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249cc50 (107): Transport endpoint is not connected 00:47:29.371 [2024-09-27 16:06:09.632023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249cc50 (9): Bad file descriptor 00:47:29.371 [2024-09-27 16:06:09.633024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:29.371 [2024-09-27 16:06:09.633032] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:29.371 [2024-09-27 16:06:09.633038] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:29.371 [2024-09-27 16:06:09.633044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:47:29.371 request: 00:47:29.371 { 00:47:29.371 "name": "nvme0", 00:47:29.371 "trtype": "tcp", 00:47:29.371 "traddr": "127.0.0.1", 00:47:29.371 "adrfam": "ipv4", 00:47:29.371 "trsvcid": "4420", 00:47:29.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:29.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:29.371 "prchk_reftag": false, 00:47:29.371 "prchk_guard": false, 00:47:29.371 "hdgst": false, 00:47:29.371 "ddgst": false, 00:47:29.371 "psk": "key1", 00:47:29.371 "allow_unrecognized_csi": false, 00:47:29.371 "method": "bdev_nvme_attach_controller", 00:47:29.371 "req_id": 1 00:47:29.371 } 00:47:29.371 Got JSON-RPC error response 00:47:29.371 response: 00:47:29.371 { 00:47:29.371 "code": -5, 00:47:29.371 "message": "Input/output error" 00:47:29.371 } 00:47:29.371 16:06:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:29.371 16:06:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:29.371 16:06:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:29.371 16:06:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:29.371 16:06:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.371 16:06:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:29.371 16:06:09 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:29.371 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.632 16:06:09 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:29.632 16:06:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:29.632 16:06:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:29.892 16:06:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:29.892 16:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:29.892 16:06:10 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:29.892 16:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.892 16:06:10 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:30.151 16:06:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:47:30.151 16:06:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FhNO8nkpzX 00:47:30.151 16:06:10 keyring_file -- keyring/file.sh@82 -- # 
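Everything wrapped in NOT here is a negative test: attaching with the wrong PSK (key1) must fail, and the harness inverts the exit status so the run only proceeds when the RPC errors out as expected (code -5, Input/output error). A minimal sketch of the idiom, not SPDK's exact autotest_common.sh implementation:

    # expect-failure wrapper: succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1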
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:30.151 16:06:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.151 16:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.412 [2024-09-27 16:06:10.689780] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FhNO8nkpzX': 0100660 00:47:30.412 [2024-09-27 16:06:10.689801] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:30.412 request: 00:47:30.412 { 00:47:30.412 "name": "key0", 00:47:30.412 "path": "/tmp/tmp.FhNO8nkpzX", 00:47:30.412 "method": "keyring_file_add_key", 00:47:30.412 "req_id": 1 00:47:30.412 } 00:47:30.412 Got JSON-RPC error response 00:47:30.412 response: 00:47:30.412 { 00:47:30.412 "code": -1, 00:47:30.412 "message": "Operation not permitted" 00:47:30.412 } 00:47:30.412 16:06:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:30.412 16:06:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:30.412 16:06:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:30.412 16:06:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:30.412 16:06:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FhNO8nkpzX 00:47:30.412 16:06:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FhNO8nkpzX 00:47:30.412 16:06:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FhNO8nkpzX 00:47:30.412 16:06:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:30.412 16:06:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:30.672 16:06:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:30.672 16:06:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
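The chmod dance above exercises the keyring's permission check: a key file that is group- or world-accessible (0660) is rejected with "Invalid permissions ... 0100660", and the add only succeeds once the file is owner-only. A short sketch, with a placeholder path:

    # key files must be 0600 (owner read/write only) or keyring_file_add_key refuses them
    chmod 0600 /tmp/psk0.key   # placeholder path
    ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/psk0.key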
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:30.672 16:06:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:30.673 16:06:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:30.673 16:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:30.933 [2024-09-27 16:06:11.199066] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FhNO8nkpzX': No such file or directory 00:47:30.933 [2024-09-27 16:06:11.199080] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:30.933 [2024-09-27 16:06:11.199093] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:30.933 [2024-09-27 16:06:11.199098] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:30.933 [2024-09-27 16:06:11.199103] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:30.933 [2024-09-27 16:06:11.199108] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:30.933 request: 00:47:30.933 { 00:47:30.933 "name": "nvme0", 00:47:30.933 "trtype": "tcp", 00:47:30.933 "traddr": "127.0.0.1", 00:47:30.933 "adrfam": "ipv4", 00:47:30.933 "trsvcid": "4420", 00:47:30.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:30.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:30.933 "prchk_reftag": false, 00:47:30.933 "prchk_guard": false, 00:47:30.933 "hdgst": false, 00:47:30.933 "ddgst": false, 00:47:30.933 "psk": "key0", 00:47:30.933 "allow_unrecognized_csi": false, 00:47:30.933 "method": "bdev_nvme_attach_controller", 00:47:30.933 "req_id": 1 00:47:30.933 } 00:47:30.933 Got JSON-RPC error response 00:47:30.933 response: 00:47:30.933 { 00:47:30.933 "code": -19, 00:47:30.933 "message": "No such device" 00:47:30.933 } 00:47:30.933 16:06:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:30.933 16:06:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:30.933 16:06:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:30.933 16:06:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:30.933 16:06:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:30.933 16:06:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9ZK6hQICqw 00:47:30.933 16:06:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:47:30.934 16:06:11 keyring_file -- nvmf/common.sh@729 -- # python - 00:47:31.194 16:06:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9ZK6hQICqw 00:47:31.194 16:06:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9ZK6hQICqw 00:47:31.194 16:06:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.9ZK6hQICqw 00:47:31.194 16:06:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZK6hQICqw 00:47:31.194 16:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZK6hQICqw 00:47:31.194 16:06:11 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:31.194 16:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:31.455 nvme0n1 00:47:31.455 16:06:11 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:31.455 16:06:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:31.455 16:06:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:31.455 16:06:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:31.455 16:06:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:31.455 16:06:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:31.715 16:06:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:31.715 16:06:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:31.715 16:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:31.715 16:06:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:31.715 16:06:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:31.715 16:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:31.715 16:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:31.715 16:06:12 keyring_file -- keyring/common.sh@8 -- # 
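prep_key builds the on-disk secret in the NVMe TLS PSK interchange format via the inline "python -" step shown above (format_interchange_psk -> format_key NVMeTLSkey-1 ...). A hedged reconstruction of what that one-liner computes; the two-digit digest field ("00" for digest 0) and the little-endian CRC-32 trailer are assumptions about the interchange layout, and the authoritative snippet lives in test/nvmf/common.sh:

    # assumed reconstruction of the inline PSK formatting: base64(key || CRC-32(key))
    python3 -c 'import base64,struct,zlib;k=bytes.fromhex("00112233445566778899aabbccddeeff");print("NVMeTLSkey-1:00:%s:"%base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())'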
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:31.974 16:06:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:31.974 16:06:12 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:31.974 16:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:31.974 16:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:31.974 16:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:31.974 16:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:31.974 16:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:32.233 16:06:12 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:32.233 16:06:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:32.233 16:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:32.493 16:06:12 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:32.493 16:06:12 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:32.493 16:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:32.493 16:06:12 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:32.493 16:06:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZK6hQICqw 00:47:32.493 16:06:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZK6hQICqw 00:47:32.752 16:06:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xHziMsasUy 00:47:32.752 16:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xHziMsasUy 00:47:33.038 16:06:13 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:33.038 16:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:33.038 nvme0n1 00:47:33.038 16:06:13 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:33.038 16:06:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:33.299 16:06:13 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:33.299 "subsystems": [ 00:47:33.299 { 00:47:33.299 "subsystem": "keyring", 00:47:33.299 "config": [ 00:47:33.299 { 00:47:33.299 "method": "keyring_file_add_key", 00:47:33.299 "params": { 00:47:33.299 "name": "key0", 00:47:33.299 "path": "/tmp/tmp.9ZK6hQICqw" 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "keyring_file_add_key", 00:47:33.299 "params": { 00:47:33.299 "name": "key1", 00:47:33.299 "path": "/tmp/tmp.xHziMsasUy" 00:47:33.299 } 00:47:33.299 } 00:47:33.299 ] 00:47:33.299 
}, 00:47:33.299 { 00:47:33.299 "subsystem": "iobuf", 00:47:33.299 "config": [ 00:47:33.299 { 00:47:33.299 "method": "iobuf_set_options", 00:47:33.299 "params": { 00:47:33.299 "small_pool_count": 8192, 00:47:33.299 "large_pool_count": 1024, 00:47:33.299 "small_bufsize": 8192, 00:47:33.299 "large_bufsize": 135168 00:47:33.299 } 00:47:33.299 } 00:47:33.299 ] 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "subsystem": "sock", 00:47:33.299 "config": [ 00:47:33.299 { 00:47:33.299 "method": "sock_set_default_impl", 00:47:33.299 "params": { 00:47:33.299 "impl_name": "posix" 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "sock_impl_set_options", 00:47:33.299 "params": { 00:47:33.299 "impl_name": "ssl", 00:47:33.299 "recv_buf_size": 4096, 00:47:33.299 "send_buf_size": 4096, 00:47:33.299 "enable_recv_pipe": true, 00:47:33.299 "enable_quickack": false, 00:47:33.299 "enable_placement_id": 0, 00:47:33.299 "enable_zerocopy_send_server": true, 00:47:33.299 "enable_zerocopy_send_client": false, 00:47:33.299 "zerocopy_threshold": 0, 00:47:33.299 "tls_version": 0, 00:47:33.299 "enable_ktls": false 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "sock_impl_set_options", 00:47:33.299 "params": { 00:47:33.299 "impl_name": "posix", 00:47:33.299 "recv_buf_size": 2097152, 00:47:33.299 "send_buf_size": 2097152, 00:47:33.299 "enable_recv_pipe": true, 00:47:33.299 "enable_quickack": false, 00:47:33.299 "enable_placement_id": 0, 00:47:33.299 "enable_zerocopy_send_server": true, 00:47:33.299 "enable_zerocopy_send_client": false, 00:47:33.299 "zerocopy_threshold": 0, 00:47:33.299 "tls_version": 0, 00:47:33.299 "enable_ktls": false 00:47:33.299 } 00:47:33.299 } 00:47:33.299 ] 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "subsystem": "vmd", 00:47:33.299 "config": [] 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "subsystem": "accel", 00:47:33.299 "config": [ 00:47:33.299 { 00:47:33.299 "method": "accel_set_options", 00:47:33.299 "params": { 00:47:33.299 "small_cache_size": 128, 00:47:33.299 "large_cache_size": 16, 00:47:33.299 "task_count": 2048, 00:47:33.299 "sequence_count": 2048, 00:47:33.299 "buf_count": 2048 00:47:33.299 } 00:47:33.299 } 00:47:33.299 ] 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "subsystem": "bdev", 00:47:33.299 "config": [ 00:47:33.299 { 00:47:33.299 "method": "bdev_set_options", 00:47:33.299 "params": { 00:47:33.299 "bdev_io_pool_size": 65535, 00:47:33.299 "bdev_io_cache_size": 256, 00:47:33.299 "bdev_auto_examine": true, 00:47:33.299 "iobuf_small_cache_size": 128, 00:47:33.299 "iobuf_large_cache_size": 16 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "bdev_raid_set_options", 00:47:33.299 "params": { 00:47:33.299 "process_window_size_kb": 1024, 00:47:33.299 "process_max_bandwidth_mb_sec": 0 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "bdev_iscsi_set_options", 00:47:33.299 "params": { 00:47:33.299 "timeout_sec": 30 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "bdev_nvme_set_options", 00:47:33.299 "params": { 00:47:33.299 "action_on_timeout": "none", 00:47:33.299 "timeout_us": 0, 00:47:33.299 "timeout_admin_us": 0, 00:47:33.299 "keep_alive_timeout_ms": 10000, 00:47:33.299 "arbitration_burst": 0, 00:47:33.299 "low_priority_weight": 0, 00:47:33.299 "medium_priority_weight": 0, 00:47:33.299 "high_priority_weight": 0, 00:47:33.299 "nvme_adminq_poll_period_us": 10000, 00:47:33.299 "nvme_ioq_poll_period_us": 0, 00:47:33.299 "io_queue_requests": 512, 00:47:33.299 "delay_cmd_submit": true, 00:47:33.299 
"transport_retry_count": 4, 00:47:33.299 "bdev_retry_count": 3, 00:47:33.299 "transport_ack_timeout": 0, 00:47:33.299 "ctrlr_loss_timeout_sec": 0, 00:47:33.299 "reconnect_delay_sec": 0, 00:47:33.299 "fast_io_fail_timeout_sec": 0, 00:47:33.299 "disable_auto_failback": false, 00:47:33.299 "generate_uuids": false, 00:47:33.299 "transport_tos": 0, 00:47:33.299 "nvme_error_stat": false, 00:47:33.299 "rdma_srq_size": 0, 00:47:33.299 "io_path_stat": false, 00:47:33.299 "allow_accel_sequence": false, 00:47:33.299 "rdma_max_cq_size": 0, 00:47:33.299 "rdma_cm_event_timeout_ms": 0, 00:47:33.299 "dhchap_digests": [ 00:47:33.299 "sha256", 00:47:33.299 "sha384", 00:47:33.299 "sha512" 00:47:33.299 ], 00:47:33.299 "dhchap_dhgroups": [ 00:47:33.299 "null", 00:47:33.299 "ffdhe2048", 00:47:33.299 "ffdhe3072", 00:47:33.299 "ffdhe4096", 00:47:33.299 "ffdhe6144", 00:47:33.299 "ffdhe8192" 00:47:33.299 ] 00:47:33.299 } 00:47:33.299 }, 00:47:33.299 { 00:47:33.299 "method": "bdev_nvme_attach_controller", 00:47:33.299 "params": { 00:47:33.299 "name": "nvme0", 00:47:33.299 "trtype": "TCP", 00:47:33.299 "adrfam": "IPv4", 00:47:33.299 "traddr": "127.0.0.1", 00:47:33.299 "trsvcid": "4420", 00:47:33.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:33.299 "prchk_reftag": false, 00:47:33.299 "prchk_guard": false, 00:47:33.299 "ctrlr_loss_timeout_sec": 0, 00:47:33.299 "reconnect_delay_sec": 0, 00:47:33.300 "fast_io_fail_timeout_sec": 0, 00:47:33.300 "psk": "key0", 00:47:33.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:33.300 "hdgst": false, 00:47:33.300 "ddgst": false 00:47:33.300 } 00:47:33.300 }, 00:47:33.300 { 00:47:33.300 "method": "bdev_nvme_set_hotplug", 00:47:33.300 "params": { 00:47:33.300 "period_us": 100000, 00:47:33.300 "enable": false 00:47:33.300 } 00:47:33.300 }, 00:47:33.300 { 00:47:33.300 "method": "bdev_wait_for_examine" 00:47:33.300 } 00:47:33.300 ] 00:47:33.300 }, 00:47:33.300 { 00:47:33.300 "subsystem": "nbd", 00:47:33.300 "config": [] 00:47:33.300 } 00:47:33.300 ] 00:47:33.300 }' 00:47:33.300 16:06:13 keyring_file -- keyring/file.sh@115 -- # killprocess 788049 00:47:33.300 16:06:13 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 788049 ']' 00:47:33.300 16:06:13 keyring_file -- common/autotest_common.sh@954 -- # kill -0 788049 00:47:33.300 16:06:13 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:33.300 16:06:13 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:33.300 16:06:13 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788049 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788049' 00:47:33.561 killing process with pid 788049 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@969 -- # kill 788049 00:47:33.561 Received shutdown signal, test time was about 1.000000 seconds 00:47:33.561 00:47:33.561 Latency(us) 00:47:33.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:33.561 =================================================================================================================== 00:47:33.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@974 -- # wait 788049 00:47:33.561 16:06:13 keyring_file -- keyring/file.sh@118 -- # bperfpid=790301 00:47:33.561 16:06:13 
keyring_file -- keyring/file.sh@120 -- # waitforlisten 790301 /var/tmp/bperf.sock 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 790301 ']' 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:33.561 16:06:13 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:33.561 16:06:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:33.561 16:06:13 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:33.561 "subsystems": [ 00:47:33.561 { 00:47:33.561 "subsystem": "keyring", 00:47:33.561 "config": [ 00:47:33.561 { 00:47:33.561 "method": "keyring_file_add_key", 00:47:33.561 "params": { 00:47:33.561 "name": "key0", 00:47:33.561 "path": "/tmp/tmp.9ZK6hQICqw" 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "keyring_file_add_key", 00:47:33.561 "params": { 00:47:33.561 "name": "key1", 00:47:33.561 "path": "/tmp/tmp.xHziMsasUy" 00:47:33.561 } 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": "iobuf", 00:47:33.561 "config": [ 00:47:33.561 { 00:47:33.561 "method": "iobuf_set_options", 00:47:33.561 "params": { 00:47:33.561 "small_pool_count": 8192, 00:47:33.561 "large_pool_count": 1024, 00:47:33.561 "small_bufsize": 8192, 00:47:33.561 "large_bufsize": 135168 00:47:33.561 } 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": "sock", 00:47:33.561 "config": [ 00:47:33.561 { 00:47:33.561 "method": "sock_set_default_impl", 00:47:33.561 "params": { 00:47:33.561 "impl_name": "posix" 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "sock_impl_set_options", 00:47:33.561 "params": { 00:47:33.561 "impl_name": "ssl", 00:47:33.561 "recv_buf_size": 4096, 00:47:33.561 "send_buf_size": 4096, 00:47:33.561 "enable_recv_pipe": true, 00:47:33.561 "enable_quickack": false, 00:47:33.561 "enable_placement_id": 0, 00:47:33.561 "enable_zerocopy_send_server": true, 00:47:33.561 "enable_zerocopy_send_client": false, 00:47:33.561 "zerocopy_threshold": 0, 00:47:33.561 "tls_version": 0, 00:47:33.561 "enable_ktls": false 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "sock_impl_set_options", 00:47:33.561 "params": { 00:47:33.561 "impl_name": "posix", 00:47:33.561 "recv_buf_size": 2097152, 00:47:33.561 "send_buf_size": 2097152, 00:47:33.561 "enable_recv_pipe": true, 00:47:33.561 "enable_quickack": false, 00:47:33.561 "enable_placement_id": 0, 00:47:33.561 "enable_zerocopy_send_server": true, 00:47:33.561 "enable_zerocopy_send_client": false, 00:47:33.561 "zerocopy_threshold": 0, 00:47:33.561 "tls_version": 0, 00:47:33.561 "enable_ktls": false 00:47:33.561 } 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": "vmd", 00:47:33.561 "config": [] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": "accel", 00:47:33.561 "config": [ 00:47:33.561 { 00:47:33.561 "method": 
"accel_set_options", 00:47:33.561 "params": { 00:47:33.561 "small_cache_size": 128, 00:47:33.561 "large_cache_size": 16, 00:47:33.561 "task_count": 2048, 00:47:33.561 "sequence_count": 2048, 00:47:33.561 "buf_count": 2048 00:47:33.561 } 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": "bdev", 00:47:33.561 "config": [ 00:47:33.561 { 00:47:33.561 "method": "bdev_set_options", 00:47:33.561 "params": { 00:47:33.561 "bdev_io_pool_size": 65535, 00:47:33.561 "bdev_io_cache_size": 256, 00:47:33.561 "bdev_auto_examine": true, 00:47:33.561 "iobuf_small_cache_size": 128, 00:47:33.561 "iobuf_large_cache_size": 16 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_raid_set_options", 00:47:33.561 "params": { 00:47:33.561 "process_window_size_kb": 1024, 00:47:33.561 "process_max_bandwidth_mb_sec": 0 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_iscsi_set_options", 00:47:33.561 "params": { 00:47:33.561 "timeout_sec": 30 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_nvme_set_options", 00:47:33.561 "params": { 00:47:33.561 "action_on_timeout": "none", 00:47:33.561 "timeout_us": 0, 00:47:33.561 "timeout_admin_us": 0, 00:47:33.561 "keep_alive_timeout_ms": 10000, 00:47:33.561 "arbitration_burst": 0, 00:47:33.561 "low_priority_weight": 0, 00:47:33.561 "medium_priority_weight": 0, 00:47:33.561 "high_priority_weight": 0, 00:47:33.561 "nvme_adminq_poll_period_us": 10000, 00:47:33.561 "nvme_ioq_poll_period_us": 0, 00:47:33.561 "io_queue_requests": 512, 00:47:33.561 "delay_cmd_submit": true, 00:47:33.561 "transport_retry_count": 4, 00:47:33.561 "bdev_retry_count": 3, 00:47:33.561 "transport_ack_timeout": 0, 00:47:33.561 "ctrlr_loss_timeout_sec": 0, 00:47:33.561 "reconnect_delay_sec": 0, 00:47:33.561 "fast_io_fail_timeout_sec": 0, 00:47:33.561 "disable_auto_failback": false, 00:47:33.561 "generate_uuids": false, 00:47:33.561 "transport_tos": 0, 00:47:33.561 "nvme_error_stat": false, 00:47:33.561 "rdma_srq_size": 0, 00:47:33.561 "io_path_stat": false, 00:47:33.561 "allow_accel_sequence": false, 00:47:33.561 "rdma_max_cq_size": 0, 00:47:33.561 "rdma_cm_event_timeout_ms": 0, 00:47:33.561 "dhchap_digests": [ 00:47:33.561 "sha256", 00:47:33.561 "sha384", 00:47:33.561 "sha512" 00:47:33.561 ], 00:47:33.561 "dhchap_dhgroups": [ 00:47:33.561 "null", 00:47:33.561 "ffdhe2048", 00:47:33.561 "ffdhe3072", 00:47:33.561 "ffdhe4096", 00:47:33.561 "ffdhe6144", 00:47:33.561 "ffdhe8192" 00:47:33.561 ] 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_nvme_attach_controller", 00:47:33.561 "params": { 00:47:33.561 "name": "nvme0", 00:47:33.561 "trtype": "TCP", 00:47:33.561 "adrfam": "IPv4", 00:47:33.561 "traddr": "127.0.0.1", 00:47:33.561 "trsvcid": "4420", 00:47:33.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:33.561 "prchk_reftag": false, 00:47:33.561 "prchk_guard": false, 00:47:33.561 "ctrlr_loss_timeout_sec": 0, 00:47:33.561 "reconnect_delay_sec": 0, 00:47:33.561 "fast_io_fail_timeout_sec": 0, 00:47:33.561 "psk": "key0", 00:47:33.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:33.561 "hdgst": false, 00:47:33.561 "ddgst": false 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_nvme_set_hotplug", 00:47:33.561 "params": { 00:47:33.561 "period_us": 100000, 00:47:33.561 "enable": false 00:47:33.561 } 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "method": "bdev_wait_for_examine" 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }, 00:47:33.561 { 00:47:33.561 "subsystem": 
"nbd", 00:47:33.561 "config": [] 00:47:33.561 } 00:47:33.561 ] 00:47:33.561 }' 00:47:33.561 [2024-09-27 16:06:13.992650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:47:33.561 [2024-09-27 16:06:13.992721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790301 ] 00:47:33.820 [2024-09-27 16:06:14.070711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:33.820 [2024-09-27 16:06:14.098773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:33.820 [2024-09-27 16:06:14.236106] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:34.488 16:06:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:34.488 16:06:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:34.488 16:06:14 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:34.488 16:06:14 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:34.488 16:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:34.488 16:06:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:34.488 16:06:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:34.488 16:06:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:34.488 16:06:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:34.489 16:06:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:34.489 16:06:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:34.489 16:06:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:34.768 16:06:15 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:34.768 16:06:15 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:34.768 16:06:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:34.768 16:06:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:34.768 16:06:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:34.768 16:06:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:34.768 16:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:35.030 16:06:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9ZK6hQICqw /tmp/tmp.xHziMsasUy 00:47:35.030 16:06:15 keyring_file -- keyring/file.sh@20 -- # killprocess 790301 00:47:35.030 16:06:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 790301 ']' 
00:47:35.030 16:06:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 790301 00:47:35.030 16:06:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 790301 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 790301' 00:47:35.365 killing process with pid 790301 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@969 -- # kill 790301 00:47:35.365 Received shutdown signal, test time was about 1.000000 seconds 00:47:35.365 00:47:35.365 Latency(us) 00:47:35.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:35.365 =================================================================================================================== 00:47:35.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@974 -- # wait 790301 00:47:35.365 16:06:15 keyring_file -- keyring/file.sh@21 -- # killprocess 787938 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 787938 ']' 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 787938 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 787938 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 787938' 00:47:35.365 killing process with pid 787938 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@969 -- # kill 787938 00:47:35.365 16:06:15 keyring_file -- common/autotest_common.sh@974 -- # wait 787938 00:47:35.643 00:47:35.643 real 0m11.987s 00:47:35.643 user 0m28.951s 00:47:35.643 sys 0m2.656s 00:47:35.643 16:06:15 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:35.643 16:06:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:35.643 ************************************ 00:47:35.643 END TEST keyring_file 00:47:35.643 ************************************ 00:47:35.643 16:06:15 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:47:35.643 16:06:15 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:35.643 16:06:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:35.643 16:06:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:35.643 16:06:15 -- common/autotest_common.sh@10 -- # set +x 00:47:35.643 ************************************ 00:47:35.643 START TEST keyring_linux 00:47:35.643 ************************************ 00:47:35.643 16:06:16 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:35.643 Joined session keyring: 1038826480 00:47:35.925 * Looking for test storage... 00:47:35.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.925 --rc genhtml_branch_coverage=1 00:47:35.925 --rc genhtml_function_coverage=1 00:47:35.925 --rc genhtml_legend=1 00:47:35.925 --rc geninfo_all_blocks=1 00:47:35.925 --rc geninfo_unexecuted_blocks=1 00:47:35.925 00:47:35.925 ' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.925 --rc genhtml_branch_coverage=1 00:47:35.925 --rc genhtml_function_coverage=1 00:47:35.925 --rc genhtml_legend=1 00:47:35.925 --rc geninfo_all_blocks=1 00:47:35.925 --rc geninfo_unexecuted_blocks=1 00:47:35.925 00:47:35.925 ' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.925 --rc genhtml_branch_coverage=1 00:47:35.925 --rc genhtml_function_coverage=1 00:47:35.925 --rc genhtml_legend=1 00:47:35.925 --rc geninfo_all_blocks=1 00:47:35.925 --rc geninfo_unexecuted_blocks=1 00:47:35.925 00:47:35.925 ' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.925 --rc genhtml_branch_coverage=1 00:47:35.925 --rc genhtml_function_coverage=1 00:47:35.925 --rc genhtml_legend=1 00:47:35.925 --rc geninfo_all_blocks=1 00:47:35.925 --rc geninfo_unexecuted_blocks=1 00:47:35.925 00:47:35.925 ' 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:35.925 16:06:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:35.925 16:06:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.925 16:06:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.925 16:06:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.925 16:06:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:35.925 16:06:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:35.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@729 -- # python - 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:35.925 /tmp/:spdk-test:key0 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:35.925 
16:06:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:47:35.925 16:06:16 keyring_linux -- nvmf/common.sh@729 -- # python - 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:35.925 16:06:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:35.925 /tmp/:spdk-test:key1 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=790754 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 790754 00:47:35.925 16:06:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 790754 ']' 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:35.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:35.925 16:06:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:35.926 16:06:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:36.228 [2024-09-27 16:06:16.408075] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
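prep_key above boils down to: render the raw hex key into the NVMe TLS PSK interchange string, write it to /tmp/:spdk-test:keyN, and chmod it to 0600. A sketch of the derivation is below; it assumes the interchange format is the prefix, a two-digit hash indicator, then base64 of the key bytes with a CRC32 appended (the CRC byte order is an assumption), and it mirrors rather than reproduces the python helper at nvmf/common.sh@729:

  format_interchange_psk() {
      local key=$1 digest=$2   # digest 0 = configured PSK, no retained-hash KDF
      # one-line python: append CRC32 to the key bytes, base64 the result
      python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); crc=struct.pack("<I", zlib.crc32(k)); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"   # "<I" byte order assumed
  }
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0   # key files must not be group/world readable

That yields a string shaped like the NVMeTLSkey-1:00:MDAx...JEiQ: value the trace registers with keyctl just below.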
00:47:36.228 [2024-09-27 16:06:16.408131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790754 ] 00:47:36.228 [2024-09-27 16:06:16.484972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:36.228 [2024-09-27 16:06:16.513704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:36.805 [2024-09-27 16:06:17.192823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:36.805 null0 00:47:36.805 [2024-09-27 16:06:17.224873] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:36.805 [2024-09-27 16:06:17.225207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:36.805 182049821 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:36.805 345075348 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=791037 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 791037 /var/tmp/bperf.sock 00:47:36.805 16:06:17 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 791037 ']' 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:36.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:36.805 16:06:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:37.065 [2024-09-27 16:06:17.302565] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
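Everything keyring-side in this test is stock keyutils: linux.sh@66-67 seed the session keyring, and the later checks resolve and dump the keys by serial number (the 182049821 / 345075348 values echoed above). Condensed:

  psk=$(cat /tmp/:spdk-test:key0)               # NVMeTLSkey-1:00:...: string
  keyctl add user :spdk-test:key0 "$psk" @s     # prints the new serial number
  sn=$(keyctl search @s user :spdk-test:key0)   # description -> serial lookup
  keyctl print "$sn"                            # payload, compared against $psk
  keyctl unlink "$sn"                           # cleanup path: unlink by serial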
00:47:37.065 [2024-09-27 16:06:17.302615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791037 ] 00:47:37.065 [2024-09-27 16:06:17.379594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:37.065 [2024-09-27 16:06:17.408282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:37.633 16:06:18 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:37.633 16:06:18 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:37.633 16:06:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:37.633 16:06:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:37.893 16:06:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:37.893 16:06:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:38.153 16:06:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:38.153 16:06:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:38.413 [2024-09-27 16:06:18.642938] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:38.413 nvme0n1 00:47:38.413 16:06:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:38.413 16:06:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:38.413 16:06:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:38.413 16:06:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:38.413 16:06:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:38.414 16:06:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:38.674 16:06:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:38.674 16:06:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:38.674 16:06:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:38.674 16:06:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:38.674 16:06:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:38.674 16:06:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:38.674 16:06:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@25 -- # sn=182049821 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:38.674 16:06:19 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 182049821 == \1\8\2\0\4\9\8\2\1 ]] 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 182049821 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:38.674 16:06:19 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:38.934 Running I/O for 1 seconds... 00:47:39.873 24298.00 IOPS, 94.91 MiB/s 00:47:39.873 Latency(us) 00:47:39.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:39.873 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:39.873 nvme0n1 : 1.01 24298.85 94.92 0.00 0.00 5252.18 4041.39 10103.47 00:47:39.873 =================================================================================================================== 00:47:39.873 Total : 24298.85 94.92 0.00 0.00 5252.18 4041.39 10103.47 00:47:39.873 { 00:47:39.873 "results": [ 00:47:39.873 { 00:47:39.873 "job": "nvme0n1", 00:47:39.873 "core_mask": "0x2", 00:47:39.873 "workload": "randread", 00:47:39.873 "status": "finished", 00:47:39.873 "queue_depth": 128, 00:47:39.873 "io_size": 4096, 00:47:39.873 "runtime": 1.005274, 00:47:39.873 "iops": 24298.84787630039, 00:47:39.873 "mibps": 94.9173745167984, 00:47:39.873 "io_failed": 0, 00:47:39.873 "io_timeout": 0, 00:47:39.873 "avg_latency_us": 5252.176921166469, 00:47:39.873 "min_latency_us": 4041.3866666666668, 00:47:39.873 "max_latency_us": 10103.466666666667 00:47:39.873 } 00:47:39.873 ], 00:47:39.873 "core_count": 1 00:47:39.873 } 00:47:39.873 16:06:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:39.873 16:06:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:40.133 16:06:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:40.133 16:06:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@638 -- 
# local arg=bperf_cmd 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:40.133 16:06:20 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:40.133 16:06:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:40.394 [2024-09-27 16:06:20.724314] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:40.394 [2024-09-27 16:06:20.724631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c39e0 (107): Transport endpoint is not connected 00:47:40.394 [2024-09-27 16:06:20.725628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c39e0 (9): Bad file descriptor 00:47:40.394 [2024-09-27 16:06:20.726629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:40.394 [2024-09-27 16:06:20.726638] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:40.394 [2024-09-27 16:06:20.726643] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:40.394 [2024-09-27 16:06:20.726649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
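This second attach is the negative half of the test: it reuses the same RPC but points --psk at :spdk-test:key1, and the harness expects it to fail (the errors above end in JSON-RPC code -5, Input/output error, just below). The NOT wrapper traced here inverts the exit status; simplified:

  NOT() {                        # succeed only when the wrapped command fails
      if "$@"; then return 1; else return 0; fi
  }
  NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key1      # handshake fails -> NOT returns 0, test passes

The real helper is more careful than this sketch: it also distinguishes crashes from ordinary failures, which is what the (( es > 128 )) check in the trace below is doing.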
00:47:40.394 request: 00:47:40.394 { 00:47:40.394 "name": "nvme0", 00:47:40.394 "trtype": "tcp", 00:47:40.394 "traddr": "127.0.0.1", 00:47:40.394 "adrfam": "ipv4", 00:47:40.394 "trsvcid": "4420", 00:47:40.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:40.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:40.394 "prchk_reftag": false, 00:47:40.394 "prchk_guard": false, 00:47:40.394 "hdgst": false, 00:47:40.394 "ddgst": false, 00:47:40.394 "psk": ":spdk-test:key1", 00:47:40.394 "allow_unrecognized_csi": false, 00:47:40.394 "method": "bdev_nvme_attach_controller", 00:47:40.394 "req_id": 1 00:47:40.394 } 00:47:40.394 Got JSON-RPC error response 00:47:40.394 response: 00:47:40.394 { 00:47:40.394 "code": -5, 00:47:40.394 "message": "Input/output error" 00:47:40.394 } 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@33 -- # sn=182049821 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 182049821 00:47:40.394 1 links removed 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@33 -- # sn=345075348 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 345075348 00:47:40.394 1 links removed 00:47:40.394 16:06:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 791037 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 791037 ']' 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 791037 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 791037 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 791037' 00:47:40.394 killing process with pid 791037 00:47:40.394 16:06:20 keyring_linux -- common/autotest_common.sh@969 -- # kill 791037 00:47:40.394 Received shutdown signal, test time was about 1.000000 seconds 00:47:40.394 00:47:40.394 
Latency(us) 00:47:40.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:40.395 =================================================================================================================== 00:47:40.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:40.395 16:06:20 keyring_linux -- common/autotest_common.sh@974 -- # wait 791037 00:47:40.655 16:06:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 790754 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 790754 ']' 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 790754 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 790754 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 790754' 00:47:40.655 killing process with pid 790754 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@969 -- # kill 790754 00:47:40.655 16:06:20 keyring_linux -- common/autotest_common.sh@974 -- # wait 790754 00:47:40.916 00:47:40.916 real 0m5.169s 00:47:40.916 user 0m9.548s 00:47:40.916 sys 0m1.488s 00:47:40.916 16:06:21 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:40.916 16:06:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:40.916 ************************************ 00:47:40.916 END TEST keyring_linux 00:47:40.916 ************************************ 00:47:40.916 16:06:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:40.916 16:06:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:47:40.916 16:06:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:40.916 16:06:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:40.916 16:06:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:47:40.916 16:06:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:47:40.916 16:06:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:47:40.916 16:06:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:40.916 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:47:40.916 16:06:21 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:47:40.916 16:06:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:47:40.916 16:06:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:47:40.916 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:47:49.054 INFO: APP EXITING 00:47:49.054 INFO: killing all VMs 00:47:49.054 INFO: killing vhost app 00:47:49.054 INFO: EXIT DONE 
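Both teardown kills above (791037, then 790754) go through autotest_common.sh's killprocess, which refuses to signal anything that does not look like the process it started. As a simplified sketch:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                 # fail fast if already gone
      local pname=$pid
      [[ $(uname) == Linux ]] && pname=$(ps --no-headers -o comm= "$pid")
      [[ $pname == sudo ]] && return 1               # never TERM a sudo wrapper
      echo "killing process with pid $pid"           # the line seen in the log
      kill "$pid" && wait "$pid" || true             # reap the child, ignore rc
  }
  killprocess "$bperfpid" && killprocess "$tgtpid"   # reactor_1, then reactor_0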
00:47:52.360 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:52.360 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:52.360 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:52.621 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:52.621 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:52.621 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:52.621 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:56.827 Cleaning 00:47:56.827 Removing: /var/run/dpdk/spdk0/config 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:56.827 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:56.827 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:56.827 Removing: /var/run/dpdk/spdk1/config 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:56.827 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:56.828 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:56.828 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:56.828 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:56.828 Removing: /var/run/dpdk/spdk2/config 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:56.828 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:56.828 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:56.828 Removing: /var/run/dpdk/spdk3/config 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:56.828 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:56.828 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:56.828 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:56.828 Removing: /var/run/dpdk/spdk4/config 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:56.828 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:56.828 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:56.828 Removing: /dev/shm/bdev_svc_trace.1 00:47:56.828 Removing: /dev/shm/nvmf_trace.0 00:47:56.828 Removing: /dev/shm/spdk_tgt_trace.pid110250 00:47:56.828 Removing: /var/run/dpdk/spdk0 00:47:56.828 Removing: /var/run/dpdk/spdk1 00:47:56.828 Removing: /var/run/dpdk/spdk2 00:47:56.828 Removing: /var/run/dpdk/spdk3 00:47:56.828 Removing: /var/run/dpdk/spdk4 00:47:56.828 Removing: /var/run/dpdk/spdk_pid108761 00:47:56.828 Removing: /var/run/dpdk/spdk_pid110250 00:47:56.828 Removing: /var/run/dpdk/spdk_pid111091 00:47:56.828 Removing: /var/run/dpdk/spdk_pid112137 00:47:56.828 Removing: /var/run/dpdk/spdk_pid112476 00:47:56.828 Removing: /var/run/dpdk/spdk_pid113537 00:47:56.828 Removing: /var/run/dpdk/spdk_pid113744 00:47:56.828 Removing: /var/run/dpdk/spdk_pid114015 00:47:56.828 Removing: /var/run/dpdk/spdk_pid115260 00:47:56.828 Removing: /var/run/dpdk/spdk_pid115971 00:47:56.828 Removing: /var/run/dpdk/spdk_pid116456 00:47:56.828 Removing: /var/run/dpdk/spdk_pid117010 00:47:56.828 Removing: /var/run/dpdk/spdk_pid117459 00:47:56.828 Removing: /var/run/dpdk/spdk_pid117783 00:47:56.828 Removing: /var/run/dpdk/spdk_pid118135 00:47:56.828 Removing: /var/run/dpdk/spdk_pid118483 00:47:56.828 Removing: /var/run/dpdk/spdk_pid118843 00:47:56.828 Removing: /var/run/dpdk/spdk_pid119945 00:47:56.828 Removing: /var/run/dpdk/spdk_pid123521 00:47:56.828 Removing: /var/run/dpdk/spdk_pid123851 00:47:56.828 Removing: /var/run/dpdk/spdk_pid124196 00:47:56.828 Removing: /var/run/dpdk/spdk_pid124280 00:47:56.828 Removing: /var/run/dpdk/spdk_pid124767 00:47:56.828 Removing: /var/run/dpdk/spdk_pid124990 00:47:56.828 Removing: /var/run/dpdk/spdk_pid125380 00:47:56.828 Removing: /var/run/dpdk/spdk_pid125697 00:47:56.828 Removing: /var/run/dpdk/spdk_pid125913 00:47:56.828 Removing: /var/run/dpdk/spdk_pid126078 00:47:56.828 Removing: /var/run/dpdk/spdk_pid126384 00:47:56.828 Removing: /var/run/dpdk/spdk_pid126452 00:47:56.828 Removing: /var/run/dpdk/spdk_pid126975 00:47:56.828 Removing: /var/run/dpdk/spdk_pid127250 00:47:56.828 Removing: /var/run/dpdk/spdk_pid127653 00:47:56.828 Removing: /var/run/dpdk/spdk_pid132540 00:47:56.828 Removing: /var/run/dpdk/spdk_pid137829 00:47:56.828 Removing: /var/run/dpdk/spdk_pid150089 00:47:56.828 Removing: 
/var/run/dpdk/spdk_pid150855 00:47:56.828 Removing: /var/run/dpdk/spdk_pid156061 00:47:56.828 Removing: /var/run/dpdk/spdk_pid156542 00:47:56.828 Removing: /var/run/dpdk/spdk_pid161809 00:47:56.828 Removing: /var/run/dpdk/spdk_pid169605 00:47:56.828 Removing: /var/run/dpdk/spdk_pid172919 00:47:56.828 Removing: /var/run/dpdk/spdk_pid185607 00:47:56.828 Removing: /var/run/dpdk/spdk_pid196779 00:47:56.828 Removing: /var/run/dpdk/spdk_pid198798 00:47:56.828 Removing: /var/run/dpdk/spdk_pid199826 00:47:56.828 Removing: /var/run/dpdk/spdk_pid221096 00:47:56.828 Removing: /var/run/dpdk/spdk_pid226458 00:47:56.828 Removing: /var/run/dpdk/spdk_pid327577 00:47:56.828 Removing: /var/run/dpdk/spdk_pid334039 00:47:56.828 Removing: /var/run/dpdk/spdk_pid341293 00:47:56.828 Removing: /var/run/dpdk/spdk_pid348567 00:47:56.828 Removing: /var/run/dpdk/spdk_pid348570 00:47:56.828 Removing: /var/run/dpdk/spdk_pid349573 00:47:56.828 Removing: /var/run/dpdk/spdk_pid350572 00:47:56.828 Removing: /var/run/dpdk/spdk_pid351579 00:47:56.828 Removing: /var/run/dpdk/spdk_pid352245 00:47:56.828 Removing: /var/run/dpdk/spdk_pid352256 00:47:56.828 Removing: /var/run/dpdk/spdk_pid352585 00:47:56.828 Removing: /var/run/dpdk/spdk_pid352643 00:47:56.828 Removing: /var/run/dpdk/spdk_pid352766 00:47:56.828 Removing: /var/run/dpdk/spdk_pid353794 00:47:56.828 Removing: /var/run/dpdk/spdk_pid354794 00:47:56.828 Removing: /var/run/dpdk/spdk_pid355831 00:47:57.090 Removing: /var/run/dpdk/spdk_pid356412 00:47:57.090 Removing: /var/run/dpdk/spdk_pid356533 00:47:57.090 Removing: /var/run/dpdk/spdk_pid356762 00:47:57.090 Removing: /var/run/dpdk/spdk_pid358163 00:47:57.090 Removing: /var/run/dpdk/spdk_pid359998 00:47:57.090 Removing: /var/run/dpdk/spdk_pid369734 00:47:57.090 Removing: /var/run/dpdk/spdk_pid404662 00:47:57.090 Removing: /var/run/dpdk/spdk_pid410354 00:47:57.090 Removing: /var/run/dpdk/spdk_pid412238 00:47:57.090 Removing: /var/run/dpdk/spdk_pid414396 00:47:57.090 Removing: /var/run/dpdk/spdk_pid414726 00:47:57.090 Removing: /var/run/dpdk/spdk_pid414922 00:47:57.090 Removing: /var/run/dpdk/spdk_pid415121 00:47:57.090 Removing: /var/run/dpdk/spdk_pid415936 00:47:57.090 Removing: /var/run/dpdk/spdk_pid418165 00:47:57.090 Removing: /var/run/dpdk/spdk_pid419254 00:47:57.090 Removing: /var/run/dpdk/spdk_pid419962 00:47:57.090 Removing: /var/run/dpdk/spdk_pid422650 00:47:57.090 Removing: /var/run/dpdk/spdk_pid423193 00:47:57.090 Removing: /var/run/dpdk/spdk_pid424078 00:47:57.090 Removing: /var/run/dpdk/spdk_pid429204 00:47:57.090 Removing: /var/run/dpdk/spdk_pid435781 00:47:57.090 Removing: /var/run/dpdk/spdk_pid435782 00:47:57.090 Removing: /var/run/dpdk/spdk_pid435784 00:47:57.090 Removing: /var/run/dpdk/spdk_pid440565 00:47:57.090 Removing: /var/run/dpdk/spdk_pid445439 00:47:57.090 Removing: /var/run/dpdk/spdk_pid451667 00:47:57.090 Removing: /var/run/dpdk/spdk_pid496163 00:47:57.090 Removing: /var/run/dpdk/spdk_pid500978 00:47:57.090 Removing: /var/run/dpdk/spdk_pid508261 00:47:57.090 Removing: /var/run/dpdk/spdk_pid509754 00:47:57.090 Removing: /var/run/dpdk/spdk_pid511309 00:47:57.090 Removing: /var/run/dpdk/spdk_pid513121 00:47:57.090 Removing: /var/run/dpdk/spdk_pid518850 00:47:57.090 Removing: /var/run/dpdk/spdk_pid523827 00:47:57.090 Removing: /var/run/dpdk/spdk_pid533177 00:47:57.090 Removing: /var/run/dpdk/spdk_pid533274 00:47:57.090 Removing: /var/run/dpdk/spdk_pid538419 00:47:57.090 Removing: /var/run/dpdk/spdk_pid538747 00:47:57.090 Removing: /var/run/dpdk/spdk_pid538882 00:47:57.090 Removing: 
/var/run/dpdk/spdk_pid539533 00:47:57.090 Removing: /var/run/dpdk/spdk_pid539539 00:47:57.090 Removing: /var/run/dpdk/spdk_pid541319 00:47:57.090 Removing: /var/run/dpdk/spdk_pid543341 00:47:57.090 Removing: /var/run/dpdk/spdk_pid545167 00:47:57.090 Removing: /var/run/dpdk/spdk_pid547017 00:47:57.090 Removing: /var/run/dpdk/spdk_pid549010 00:47:57.090 Removing: /var/run/dpdk/spdk_pid551007 00:47:57.090 Removing: /var/run/dpdk/spdk_pid558471 00:47:57.090 Removing: /var/run/dpdk/spdk_pid559063 00:47:57.090 Removing: /var/run/dpdk/spdk_pid560172 00:47:57.090 Removing: /var/run/dpdk/spdk_pid561351 00:47:57.090 Removing: /var/run/dpdk/spdk_pid567765 00:47:57.090 Removing: /var/run/dpdk/spdk_pid570936 00:47:57.090 Removing: /var/run/dpdk/spdk_pid577398 00:47:57.090 Removing: /var/run/dpdk/spdk_pid584100 00:47:57.090 Removing: /var/run/dpdk/spdk_pid594569 00:47:57.090 Removing: /var/run/dpdk/spdk_pid603261 00:47:57.090 Removing: /var/run/dpdk/spdk_pid603264 00:47:57.090 Removing: /var/run/dpdk/spdk_pid626753 00:47:57.090 Removing: /var/run/dpdk/spdk_pid627432 00:47:57.352 Removing: /var/run/dpdk/spdk_pid628118 00:47:57.352 Removing: /var/run/dpdk/spdk_pid628885 00:47:57.352 Removing: /var/run/dpdk/spdk_pid629861 00:47:57.352 Removing: /var/run/dpdk/spdk_pid630619 00:47:57.352 Removing: /var/run/dpdk/spdk_pid631395 00:47:57.352 Removing: /var/run/dpdk/spdk_pid632153 00:47:57.352 Removing: /var/run/dpdk/spdk_pid637912 00:47:57.352 Removing: /var/run/dpdk/spdk_pid638256 00:47:57.352 Removing: /var/run/dpdk/spdk_pid645352 00:47:57.352 Removing: /var/run/dpdk/spdk_pid645729 00:47:57.352 Removing: /var/run/dpdk/spdk_pid652208 00:47:57.352 Removing: /var/run/dpdk/spdk_pid657337 00:47:57.352 Removing: /var/run/dpdk/spdk_pid668720 00:47:57.352 Removing: /var/run/dpdk/spdk_pid669400 00:47:57.352 Removing: /var/run/dpdk/spdk_pid674509 00:47:57.352 Removing: /var/run/dpdk/spdk_pid674864 00:47:57.352 Removing: /var/run/dpdk/spdk_pid679969 00:47:57.352 Removing: /var/run/dpdk/spdk_pid686866 00:47:57.352 Removing: /var/run/dpdk/spdk_pid690385 00:47:57.352 Removing: /var/run/dpdk/spdk_pid702673 00:47:57.352 Removing: /var/run/dpdk/spdk_pid713386 00:47:57.352 Removing: /var/run/dpdk/spdk_pid715204 00:47:57.352 Removing: /var/run/dpdk/spdk_pid716227 00:47:57.352 Removing: /var/run/dpdk/spdk_pid735995 00:47:57.352 Removing: /var/run/dpdk/spdk_pid740892 00:47:57.352 Removing: /var/run/dpdk/spdk_pid744481 00:47:57.352 Removing: /var/run/dpdk/spdk_pid752038 00:47:57.352 Removing: /var/run/dpdk/spdk_pid752043 00:47:57.352 Removing: /var/run/dpdk/spdk_pid758171 00:47:57.352 Removing: /var/run/dpdk/spdk_pid760503 00:47:57.352 Removing: /var/run/dpdk/spdk_pid762724 00:47:57.352 Removing: /var/run/dpdk/spdk_pid763962 00:47:57.352 Removing: /var/run/dpdk/spdk_pid766425 00:47:57.352 Removing: /var/run/dpdk/spdk_pid767752 00:47:57.352 Removing: /var/run/dpdk/spdk_pid777901 00:47:57.352 Removing: /var/run/dpdk/spdk_pid778373 00:47:57.352 Removing: /var/run/dpdk/spdk_pid779019 00:47:57.352 Removing: /var/run/dpdk/spdk_pid781987 00:47:57.352 Removing: /var/run/dpdk/spdk_pid782546 00:47:57.352 Removing: /var/run/dpdk/spdk_pid783007 00:47:57.352 Removing: /var/run/dpdk/spdk_pid787938 00:47:57.352 Removing: /var/run/dpdk/spdk_pid788049 00:47:57.352 Removing: /var/run/dpdk/spdk_pid790301 00:47:57.352 Removing: /var/run/dpdk/spdk_pid790754 00:47:57.352 Removing: /var/run/dpdk/spdk_pid791037 00:47:57.352 Clean 00:47:57.613 16:06:37 -- common/autotest_common.sh@1451 -- # return 0 00:47:57.613 16:06:37 -- spdk/autotest.sh@385 -- # 
timing_exit post_cleanup 00:47:57.613 16:06:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:57.613 16:06:37 -- common/autotest_common.sh@10 -- # set +x 00:47:57.613 16:06:37 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:47:57.613 16:06:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:57.613 16:06:37 -- common/autotest_common.sh@10 -- # set +x 00:47:57.613 16:06:37 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:57.613 16:06:37 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:57.613 16:06:37 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:57.613 16:06:37 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:47:57.613 16:06:37 -- spdk/autotest.sh@394 -- # hostname 00:47:57.613 16:06:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:57.874 geninfo: WARNING: invalid characters removed from testname! 00:48:24.451 16:07:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:25.834 16:07:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:27.742 16:07:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:29.122 16:07:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:31.039 16:07:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:32.423 16:07:12 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:34.336 16:07:14 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:48:34.336 16:07:14 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:48:34.336 16:07:14 -- common/autotest_common.sh@1681 -- $ lcov --version 00:48:34.336 16:07:14 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:48:34.336 16:07:14 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:48:34.336 16:07:14 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:48:34.336 16:07:14 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:48:34.336 16:07:14 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:48:34.336 16:07:14 -- scripts/common.sh@336 -- $ IFS=.-: 00:48:34.336 16:07:14 -- scripts/common.sh@336 -- $ read -ra ver1 00:48:34.336 16:07:14 -- scripts/common.sh@337 -- $ IFS=.-: 00:48:34.336 16:07:14 -- scripts/common.sh@337 -- $ read -ra ver2 00:48:34.336 16:07:14 -- scripts/common.sh@338 -- $ local 'op=<' 00:48:34.336 16:07:14 -- scripts/common.sh@340 -- $ ver1_l=2 00:48:34.336 16:07:14 -- scripts/common.sh@341 -- $ ver2_l=1 00:48:34.336 16:07:14 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:48:34.336 16:07:14 -- scripts/common.sh@344 -- $ case "$op" in 00:48:34.336 16:07:14 -- scripts/common.sh@345 -- $ : 1 00:48:34.336 16:07:14 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:48:34.336 16:07:14 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:34.336 16:07:14 -- scripts/common.sh@365 -- $ decimal 1 00:48:34.336 16:07:14 -- scripts/common.sh@353 -- $ local d=1 00:48:34.336 16:07:14 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:48:34.336 16:07:14 -- scripts/common.sh@355 -- $ echo 1 00:48:34.336 16:07:14 -- scripts/common.sh@365 -- $ ver1[v]=1 00:48:34.336 16:07:14 -- scripts/common.sh@366 -- $ decimal 2 00:48:34.336 16:07:14 -- scripts/common.sh@353 -- $ local d=2 00:48:34.336 16:07:14 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:48:34.336 16:07:14 -- scripts/common.sh@355 -- $ echo 2 00:48:34.336 16:07:14 -- scripts/common.sh@366 -- $ ver2[v]=2 00:48:34.336 16:07:14 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:48:34.336 16:07:14 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:48:34.336 16:07:14 -- scripts/common.sh@368 -- $ return 0 00:48:34.336 16:07:14 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:34.336 16:07:14 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:48:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.336 --rc genhtml_branch_coverage=1 00:48:34.336 --rc genhtml_function_coverage=1 00:48:34.336 --rc genhtml_legend=1 00:48:34.336 --rc geninfo_all_blocks=1 00:48:34.336 --rc geninfo_unexecuted_blocks=1 00:48:34.336 00:48:34.336 ' 00:48:34.336 16:07:14 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:48:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.336 --rc genhtml_branch_coverage=1 00:48:34.336 --rc genhtml_function_coverage=1 00:48:34.336 --rc genhtml_legend=1 00:48:34.336 --rc geninfo_all_blocks=1 00:48:34.336 --rc geninfo_unexecuted_blocks=1 00:48:34.336 00:48:34.336 ' 00:48:34.336 16:07:14 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:48:34.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.337 --rc genhtml_branch_coverage=1 00:48:34.337 --rc genhtml_function_coverage=1 00:48:34.337 --rc genhtml_legend=1 00:48:34.337 --rc geninfo_all_blocks=1 00:48:34.337 --rc geninfo_unexecuted_blocks=1 00:48:34.337 00:48:34.337 ' 00:48:34.337 16:07:14 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:48:34.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.337 --rc genhtml_branch_coverage=1 00:48:34.337 --rc genhtml_function_coverage=1 00:48:34.337 --rc genhtml_legend=1 00:48:34.337 --rc geninfo_all_blocks=1 00:48:34.337 --rc geninfo_unexecuted_blocks=1 00:48:34.337 00:48:34.337 ' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:34.337 16:07:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:48:34.337 16:07:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:48:34.337 16:07:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:34.337 16:07:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:34.337 16:07:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.337 16:07:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.337 16:07:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.337 16:07:14 -- paths/export.sh@5 -- $ export PATH 00:48:34.337 16:07:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.337 16:07:14 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:48:34.337 16:07:14 -- common/autobuild_common.sh@479 -- $ date +%s 00:48:34.337 16:07:14 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727446034.XXXXXX 00:48:34.337 16:07:14 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727446034.bKfHNX 00:48:34.337 16:07:14 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:48:34.337 16:07:14 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:48:34.337 16:07:14 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@495 -- $ get_config_params 00:48:34.337 16:07:14 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:48:34.337 16:07:14 -- common/autotest_common.sh@10 -- $ set +x 00:48:34.337 16:07:14 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:48:34.337 16:07:14 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:48:34.337 16:07:14 -- pm/common@17 -- $ local monitor 00:48:34.337 16:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:34.337 16:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:34.337 16:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:34.337 
16:07:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:34.337 16:07:14 -- pm/common@21 -- $ date +%s 00:48:34.337 16:07:14 -- pm/common@25 -- $ sleep 1 00:48:34.337 16:07:14 -- pm/common@21 -- $ date +%s 00:48:34.337 16:07:14 -- pm/common@21 -- $ date +%s 00:48:34.337 16:07:14 -- pm/common@21 -- $ date +%s 00:48:34.337 16:07:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727446034 00:48:34.337 16:07:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727446034 00:48:34.337 16:07:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727446034 00:48:34.337 16:07:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727446034 00:48:34.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727446034_collect-cpu-load.pm.log 00:48:34.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727446034_collect-vmstat.pm.log 00:48:34.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727446034_collect-cpu-temp.pm.log 00:48:34.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727446034_collect-bmc-pm.bmc.pm.log 00:48:35.279 16:07:15 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:48:35.279 16:07:15 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:48:35.279 16:07:15 -- spdk/autopackage.sh@14 -- $ timing_finish 00:48:35.279 16:07:15 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:35.279 16:07:15 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:48:35.279 16:07:15 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:48:35.279 16:07:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:48:35.279 16:07:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:48:35.279 16:07:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:48:35.279 16:07:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:35.279 16:07:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:48:35.279 16:07:15 -- pm/common@44 -- $ pid=805099 00:48:35.279 16:07:15 -- pm/common@50 -- $ kill -TERM 805099 00:48:35.279 16:07:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:35.279 16:07:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:48:35.279 16:07:15 -- pm/common@44 -- $ pid=805100 00:48:35.279 16:07:15 -- pm/common@50 -- $ kill -TERM 805100 00:48:35.279 16:07:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:35.279 16:07:15 
-- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:48:35.279 16:07:15 -- pm/common@44 -- $ pid=805102 00:48:35.279 16:07:15 -- pm/common@50 -- $ kill -TERM 805102 00:48:35.279 16:07:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:35.279 16:07:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:48:35.279 16:07:15 -- pm/common@44 -- $ pid=805125 00:48:35.279 16:07:15 -- pm/common@50 -- $ sudo -E kill -TERM 805125 00:48:35.279 + [[ -n 6829 ]] 00:48:35.279 + sudo kill 6829 00:48:35.291 [Pipeline] } 00:48:35.306 [Pipeline] // stage 00:48:35.311 [Pipeline] } 00:48:35.325 [Pipeline] // timeout 00:48:35.330 [Pipeline] } 00:48:35.345 [Pipeline] // catchError 00:48:35.349 [Pipeline] } 00:48:35.364 [Pipeline] // wrap 00:48:35.368 [Pipeline] } 00:48:35.380 [Pipeline] // catchError 00:48:35.388 [Pipeline] stage 00:48:35.390 [Pipeline] { (Epilogue) 00:48:35.402 [Pipeline] catchError 00:48:35.403 [Pipeline] { 00:48:35.415 [Pipeline] echo 00:48:35.416 Cleanup processes 00:48:35.421 [Pipeline] sh 00:48:35.712 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:35.713 805238 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:48:35.713 805794 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:35.726 [Pipeline] sh 00:48:36.014 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:36.014 ++ grep -v 'sudo pgrep' 00:48:36.014 ++ awk '{print $1}' 00:48:36.014 + sudo kill -9 805238 00:48:36.027 [Pipeline] sh 00:48:36.319 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:48.562 [Pipeline] sh 00:48:48.854 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:48.854 Artifacts sizes are good 00:48:48.868 [Pipeline] archiveArtifacts 00:48:48.874 Archiving artifacts 00:48:49.557 [Pipeline] sh 00:48:49.849 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:49.865 [Pipeline] cleanWs 00:48:49.876 [WS-CLEANUP] Deleting project workspace... 00:48:49.876 [WS-CLEANUP] Deferred wipeout is used... 00:48:49.883 [WS-CLEANUP] done 00:48:49.885 [Pipeline] } 00:48:49.903 [Pipeline] // catchError 00:48:49.915 [Pipeline] sh 00:48:50.203 + logger -p user.info -t JENKINS-CI 00:48:50.213 [Pipeline] } 00:48:50.226 [Pipeline] // stage 00:48:50.231 [Pipeline] } 00:48:50.245 [Pipeline] // node 00:48:50.250 [Pipeline] End of Pipeline 00:48:50.291 Finished: SUCCESS
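For reference, the coverage pass traced in the epilogue (autotest.sh@394-@404, before the packaging and monitor shutdown above) condenses to one lcov pipeline. The --rc switches and filter globs are verbatim from the trace; $spdk and $out are shorthands for the workspace and output paths:

  # unquoted $RC expansion is intentional: the options must word-split
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
  # capture coverage for the tree, tagged with the hostname, then merge with
  # the pre-run baseline and prune everything that is not SPDK's own code
  lcov $RC -q -c --no-external -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"
  lcov $RC -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  lcov $RC -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov $RC -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  for glob in '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC -q -r "$out/cov_total.info" "$glob" -o "$out/cov_total.info"
  done
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR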